School of Philosophy, Australian National University
|Job category||Postdoc or similar / Fixed term|
|AOS||moral philosophy, especially formal ethics; political philosophy; decision, rational choice, social choice, and game theory; philosophy of science; philosophy of cognitive science and neuroscience; philosophy of language; formal epistemology|
|AOS categories||Metaphysics and Epistemology, Philosophy of Science|
|Organization's reference number||527917|
|Location||Canberra, Australian Capital Territory, Australia|
|Start date||Mid 2019|
Classification: Academic Level B
We are seeking world-class scholars to join a team of computer scientists, philosophers, and social scientists to lead the world in moral AI. We will identify where AI can be maximally socially beneficial, remove foundational obstacles in moral AI's path, and design algorithmic decision-making systems that reliably make defensible choices.
The ANU is launching a major new project on Humanising Machine Intelligence, uniting philosophers, computer scientists, and social scientists in the pursuit of a more ethical future for AI and Machine Learning.
Machine intelligence is already used in innumerable applications that, while not explicitly morally loaded, have clear and profound social implications, from facial recognition to the distribution of online attention. It is also used to support decisions that have explicit moral dimensions, for example about how to allocate welfare resources, and whom to grant bail or parole. And the application of MI in fully autonomous decision-making systems (robotic and otherwise) is picking up pace. Self-driving vehicles, autonomous weapons systems, and companion robots are the first wave of such systems; many more are on the way. Many companies and governments are also heavily invested in developing more general, multipurpose forms of AI. All of these autonomous systems must be able to make morally loaded decisions by themselves.
In each of these fields, inadequate attention to ethics in the design of MI systems will predictably have negative social consequences, some of which could be catastrophic. The goal of the HMI project is to forestall those risks, and to help realise the tremendous social benefits promised by MI. The project has three components: (1) Discovery: formulate the design problem by identifying the social risks and opportunities of widespread reliance on MI. (2) Foundations: identify and answer the fundamental theoretical questions on which progress towards ethical MI depends. (3) Design: develop ethical algorithms and broader MI systems in partnership with industry and government.
The HMI project chief investigators are: Associate Professors Seth Lazar (Project Leader), Colin Klein and Katie Steele (Philosophy), Professors Marcus Hutter, Sylvie Thiébaux, Bob Williamson and Lexing Xie (Computer Science), Dr. Jenny Davis (Sociology), Associate Professor Idione Meneghel (Economics), and Professor Toni Erskine (Political Science).
We are looking for up to eight talented researchers to help us humanise machine intelligence. Our primary criteria are demonstrated research excellence in a discipline area relevant to the project, and the clear potential to be a research leader both in one's discipline and in the field of moral AI. An interdisciplinary background is not required, but successful applicants will be ready and equipped to engage with scholars from other disciplines, and will be expected to work actively with scholars from at least two of the project's discipline areas.
Successful applicants will help us design the next generation of more ethical MI systems, in part through publishing internationally influential research in the leading peer-reviewed venues (as suited to their discipline). We expect them to go on from the ANU to leading positions in academia and industry. As well as conducting research at the highest level, they will help build the HMI community at ANU and globally, through convening a regular seminar series and international workshops.
Three of the new research positions will be based in the School of Philosophy. Within this discipline area, we strongly encourage applications from people with PhDs in philosophy and a background in any relevant subfield, such as, but not restricted to: moral philosophy, especially formal ethics; political philosophy; decision, rational choice, social choice, and game theory; philosophy of science; philosophy of cognitive science and neuroscience; philosophy of language; formal epistemology. Prior experience working on machine intelligence is advantageous but not required.
Though these positions will be housed in the School of Philosophy, we strongly encourage anyone who meets the selection criteria to apply, regardless of disciplinary background.
Candidates who meet the selection criteria for this position may also submit applications to the other postdoctoral positions advertised here.
For more information visit hmi.anu.edu.au
For all enquiries please contact Seth Lazar, Project Lead, Humanising Machine Intelligence Grand Challenge Program, E: Seth.Lazar@anu.edu.au.
ANU values diversity and inclusion and is committed to providing equal employment opportunities to those of all backgrounds and identities. For more information about staff equity at ANU, visit https://services.anu.edu.au/human-resources/respect-inclusion
|How to apply|
In order to apply for this role, please make sure that you upload the following documents:
- A statement addressing the selection criteria
- A current curriculum vitae (CV)
- A research statement, outlining your research to date and your plans for the coming 1-2 years
- A writing sample
- The names of four people who will write letters of recommendation
|Web address to apply||http://jobs.anu.edu.au/cw/en/job/527917/research-f...|
|Email to apply|
|Hard deadline||January 31, 2019, 11:59pm +10:00|
|Web address for more information||http://jobs.anu.edu.au/cw/en/job/527917/research-f...|
|Contact name||Seth Lazar|
|Time created||December 20, 2018, 2:54pm EST|
|Scheduled expiry date||January 31, 2019, 11:59pm +10:00|
|Last updated||December 20, 2018, 2:54pm EST|
|Job Market Calendar||
This institution has indicated that the position advertised will not follow the APA's recommended job market calendar.