Monday, October 3, 2016

Partnership on Artificial Intelligence to Benefit People and Society

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.


Press Releases

September 28, 2016 NEW YORK — IBM, DeepMind/Google, Microsoft, Amazon, and Facebook today announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field. Academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization, named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI).

The objective of the Partnership on AI is to address opportunities and challenges with AI technologies to benefit people and society. Together, the organization’s members will conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology. It does not intend to lobby government or other policymaking bodies.

The organization’s founding members will each contribute financial and research resources to the partnership and will share leadership with independent third parties, including academics, user group advocates, and industry domain experts. There will be equal representation of corporate and non-corporate members on the board of this new organization. The Partnership is in discussions with professional and scientific organizations, such as the Association for the Advancement of Artificial Intelligence (AAAI), as well as non-profit research groups including the Allen Institute for Artificial Intelligence (AI2), and anticipates announcements regarding additional participants in the near future.

AI technologies hold tremendous potential to improve many aspects of life, ranging from healthcare, education, and manufacturing to home automation and transportation. Through rigorous research, the development of best practices, and an open and transparent dialogue, the founding members of the Partnership on AI hope to maximize this potential and ensure it benefits as many people as possible.


Francesca Rossi is a research scientist at the IBM T.J. Watson Research Center and a professor of computer science at the University of Padova, Italy.

Francesca’s research interests focus on artificial intelligence, specifically constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues surrounding the development and behavior of AI systems, in particular decision support systems for group decision making. A prolific author, Francesca has published over 170 scientific articles in journals and conference proceedings, as well as co-authoring A Short Introduction to Preferences: Between AI and Social Choice. She has edited 17 volumes, including conference proceedings, collections of contributions, special issues of journals, and The Handbook of Constraint Programming.

Francesca is both a fellow of the European Association for Artificial Intelligence (EurAI fellow) and a 2015 fellow of the Radcliffe Institute for Advanced Study at Harvard University. A prominent figure in the Association for the Advancement of Artificial Intelligence (AAAI), of which she is a fellow, she has formerly served as an executive councilor of AAAI and currently co-chairs the association’s committee on AI and ethics. Francesca is an active voice in the AI community, serving as Associate Editor in Chief of the Journal of Artificial Intelligence Research (JAIR) and as a member of the editorial boards of Constraints, Artificial Intelligence, Annals of Mathematics and Artificial Intelligence (AMAI), and Knowledge and Information Systems (KAIS). She is also a member of the scientific advisory board of the Future of Life Institute, sits on the executive committee of the Institute of Electrical and Electronics Engineers (IEEE)’s global initiative on ethical considerations in the development of autonomous and intelligent systems, and belongs to the World Economic Forum Council on AI and robotics.

A recognized authority on the future of AI and AI ethics, Francesca has been interviewed widely by publications including the Wall Street Journal, the Washington Post, Motherboard, Science, The Economist, CNBC, Eurovision, Corriere della Sera, and Repubblica, and has also delivered three TEDx talks on these topics.

Mustafa Suleyman is co-founder and Head of Applied AI at DeepMind, where he is responsible for the application of DeepMind’s technology to real-world problems, as part of DeepMind’s commitment to use intelligence to make the world a better place. In February 2016 he launched DeepMind Health, which builds clinician-led technology in the NHS. Mustafa was Chief Product Officer before DeepMind was bought in 2014 by Google in their largest European acquisition to date. At 19, Mustafa dropped out of Oxford University to help set up a telephone counselling service, building it to become one of the largest mental health support services of its kind in the UK, and then worked as policy officer for then Mayor of London, Ken Livingstone. He went on to help start Reos Partners, a consultancy with seven offices across four continents specializing in designing and facilitating large-scale multi-stakeholder ‘Change Labs’ aimed at navigating complex problems. As a skilled negotiator and facilitator Mustafa has worked across the world for a wide range of clients such as the UN, the Dutch Government, and WWF.

Greg Corrado is a senior scientist at Google Research and a co-founder of the Google Brain Team. He works at the nexus of artificial intelligence, computational neuroscience, and scalable machine learning, and has published in fields ranging from behavioral economics, to particle physics, to deep learning. In his time at Google he has worked to put AI directly into the hands of users via products like RankBrain and SmartReply, and into the hands of developers via open-source software releases like TensorFlow and word2vec. He currently leads several research efforts in advanced applications of machine learning, ranging from natural human communication to expanded healthcare availability. Before coming to Google, he worked at IBM Research on neuromorphic silicon devices and large-scale neural simulations. He did his graduate studies in both Neuroscience and Computer Science at Stanford University, and his undergraduate work in Physics at Princeton University.

Eric Horvitz is a technical fellow at Microsoft, where he serves as director of the Microsoft Research lab at Redmond. His research contributions span theoretical and practical challenges with computing systems that learn from data and that can perceive, reason, and decide. His efforts have helped to bring multiple systems and services into the world, including innovations in transportation, healthcare, aerospace, ecommerce, online services, and operating systems. He has been elected fellow of the National Academy of Engineering (NAE), the Association for the Advancement of AI (AAAI), the American Association for the Advancement of Science (AAAS), and the American Academy of Arts and Sciences. He received the Feigenbaum Prize, the ACM-AAAI Allen Newell Award, and the ACM ICMI Sustained Achievement Award for foundational research contributions in AI. He was inducted into the CHI Academy for advances in human-computer collaboration. He has served as president of AAAI, chair of the AAAS Section on Computing, and on advisory committees for the National Institutes of Health, the National Science Foundation, the Computer Science and Telecommunications Board (CSTB), DARPA, and the President’s Council of Advisors on Science and Technology. He received his PhD and MD degrees from Stanford University.

Yann has been Director of AI Research at Facebook since December 2013, and is Silver Professor at New York University on a part-time basis, affiliated mainly with the NYU Center for Data Science and the Courant Institute of Mathematical Sciences.

Yann received the EE Diploma from Ecole Supérieure d’Ingénieurs en Electrotechnique et Electronique (ESIEE Paris), and a PhD in CS from Université Pierre et Marie Curie (Paris). After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, New Jersey. Yann became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU as a professor in 2003, after a brief period as a Fellow of the NEC Research Institute in Princeton. From 2012 to 2014 he was the founding director of the NYU Center for Data Science.

Yann is the co-director of the Neural Computation and Adaptive Perception Program of CIFAR, and co-lead of the Moore-Sloan Data Science Environments for NYU. He received the 2014 IEEE Neural Network Pioneer Award.


Ralf is Director of Machine Learning at Amazon and Managing Director of the Amazon Development Center Germany. His team works on problems in scalable and resource-aware machine learning, probabilistic learning algorithms (including forecasting), linking structured content, and computer vision. In 2011, he worked at Facebook, leading the Unified Ranking and Allocation team. From 2000 to 2011, he worked at Microsoft Research, where he co-led the Applied Games and Online Services and Advertising group, which engaged in research at the intersection of machine learning and computer games. Ralf was a Research Fellow of Darwin College, Cambridge, from 2000 to 2003. He holds a diploma degree in Computer Science (1997) and a PhD in Statistics (2000). Ralf’s research interests include Bayesian inference and decision making, reinforcement learning, computer games, kernel methods, and statistical learning theory. He is one of the inventors of the Drivatars system in the Forza Motorsport series as well as the TrueSkill ranking and matchmaking system in Xbox 360 Live. He also co-invented the adPredictor click-prediction technology.


Over the past few years, there has been a surge in real-world applications of AI with the rollout of technologies that use advances in learning, perception, and natural language. Today, there are vibrant ongoing discussions about how to maximize the value of these new applications and services, and about the potential societal influences of the technologies, including issues around ethics, economics, privacy, transparency, bias and inclusiveness, and trustworthiness of the technologies. The founding research scientists care about these issues too, and see the opportunity for ongoing cross-industry discussion and the development of best practices to help gain the most from AI technologies for the benefit of people and society.

We foresee great societal benefits and opportunities ahead, but we also understand that, as with every new technology, there will be concerns and confusion associated with new applications and competencies, and we look forward to working together on these important issues.

We intend to come together to address these important issues, including ethics, safety, transparency, privacy, biases, and fairness.


Amazon, DeepMind/Google, Facebook, IBM, and Microsoft have been developing AI-related technologies for years. Beyond their work on core R&D efforts, research scientists at these companies have been thinking about and discussing the potential societal impact of AI systems and how potential concerns might be addressed. Conversations have occurred at workshops, conferences, and smaller meetings. Earlier this year, several research scientists at the companies kicked off informal discussions about the possibility of bringing their companies together to form a non-profit organization charged with exploring and developing best practices. Today, we are pleased to announce those discussions have culminated in the formation of the Partnership on AI.


The goal is to ensure that applications of AI are beneficial to people and society. We believe that artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be used to help humanity address important global challenges. This organization seeks to ensure that our work fulfills these expectations. To do this, the organization will study the potential societal impact of AI systems, and develop and share best practices. We will also create working groups for different sectors, for example healthcare and transportation, allowing us to conduct research on the specific AI applications in these different sectors of the economy. We will also develop educational resources and host open forums to widely disseminate information about the latest topics in the field and support an ongoing public discussion about the technology.


We believe that by taking a multi-stakeholder approach to identifying and addressing challenges and opportunities in an open and inclusive manner, we can have the greatest benefit and positive impact for the users of AI technologies. While the Partnership on AI was founded by five major IT companies, the organization will be overseen and directed by a diverse board that balances members from the founding companies with leaders in academia, policy, law, and representatives from the non-profit sector. By bringing together these different groups, we will also seek to foster open dialogue internationally, bringing parties from around the world together to discuss these topics. A key operating principle is that we will share our work openly with the public and encourage their participation. The actions of the Partnership, including many of its discussions, meetings, results, and guidance, will be made publicly available.


The Partnership is overseen by a board that will guide its activities. The board, which is yet to be fully appointed, comprises seats for representatives from each of the five founding companies, and will be extended to include an equal number of seats for non-company representatives. Day-to-day operations will be overseen by an executive director who will work closely with the board of directors. Conferences, meetings, panels, projects, and working groups will be commissioned by the board and conducted by the executive staff.


Each founding company will have a representative on the board. The current representatives are Ralf Herbrich from Amazon, Mustafa Suleyman from DeepMind/Google, Yann LeCun from Facebook, Francesca Rossi from IBM, and Eric Horvitz from Microsoft.


To meet its goals, the organization anticipates it will host discussions, commission studies, write and distribute reports on critical topics, and seek to develop and share best practices and standards for industry. We will conduct outreach with the public and across the industry on topics related to advancing better understanding of AI systems and the potential applications and implications of this technology as they arise.


We are very excited to work with anyone who is interested in joining our effort. This is a collaborative, multi-stakeholder organization, and we want people with an interest in AI from across all fields to take part. If you want to take part in some way, please email us.


To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

To advance public understanding and awareness of AI and its potential benefits and costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.


The regular engagement of experts across multiple disciplines (including but not limited to psychology, philosophy, economics, finance, sociology, public policy, and law) to discuss and provide guidance on emerging issues related to the impact of AI on society.

The design, execution, and financial support of objective third-party studies on best practices for ethics, safety, fairness, inclusiveness, trust, and robustness in AI research, applications, and services.

The engagement of AI users and developers, as well as representatives of industry sectors that may be impacted by AI (such as healthcare, financial services, transportation, commerce, manufacturing, telecommunications, and media), to support best practices in the research, development, and use of AI technology within specific domains.

The development of informational materials on the current and future likely trajectories of research and development in core AI and related disciplines.


We believe that artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education.


The Partnership on AI shares the following tenets:
  1. We will seek to ensure that AI technologies benefit and empower as many people as possible.
  2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
  3. We are committed to open research and dialogue on the ethical, social, economic, and legal implications of AI.
  4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
  5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
  6. We will work to maximize the benefits and address the potential challenges of AI technologies, by:
    1. Working to protect the privacy and security of individuals.
    2. Striving to understand and respect the interests of all parties that may be impacted by AI advances.
    3. Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.
    4. Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
    5. Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.
  7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
  8. We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.

In support of the mission to benefit people and society, the Partnership on AI intends to conduct research, organize discussions, share insights, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational materials that advance the understanding of AI technologies, including machine perception, learning, and automated reasoning.

© 2016 Partnership on Artificial Intelligence. All Rights Reserved.
