
Research Fellow (EXPIRED)

CAIS Philosophy Fellowship, Center for AI Safety (not BA-granting)

Job category Postdoc or similar / Fixed term
AOS Open
AOC Open
Workload Full time
Vacancies 8
Location San Francisco, California, United States
Start date January 2023
Job description

The Center for AI Safety (CAIS) invites applications for the CAIS Philosophy Fellowship, which runs from January to August 2023. The Center for AI Safety is a non-profit organization committed to reducing the risks from advanced artificial intelligence systems. AI safety is a growing field that is naturally interdisciplinary, and historically, philosophers such as Nick Bostrom, Peter Railton, and David Chalmers have been instrumental in developing the field. The importance of philosophers in AI safety is also being recognized by organizations such as the Kavli Center for Ethics, Science, and the Public (UC Berkeley), the Center for Human-Compatible AI (UC Berkeley), the Centre for the Governance of AI, the Center on Long-Term Risk, the Global Priorities Institute (University of Oxford), and the Future of Humanity Institute (University of Oxford). All of these organizations have expressed an interest in including philosophers experienced in AI safety on their teams.

Previous experience with AI or AI safety is not required. Our onboarding process has been prepared assuming fellows do not have such experience.

As a philosopher in this fellowship, you will utilize your philosophical and analytical skills to clarify problems in AI safety and contribute to novel solutions. 

We will help you shape and execute your own research projects on a topic that relates to AI safety. Some examples of potential research avenues include, but are not limited to:

  • Implementing Moral Decision-Making: Should we align advanced AI systems with human values, human intentions, or social processes? How can we incorporate multiple stakeholders and moral uncertainty into an AI’s decision-making in practice? How can we build systems that are more likely to behave ethically in the face of a rapidly changing world?

  • Risks, Dynamics, Strategies: How could advanced AI systems pose existential risks? What processes might shape the behavior of advanced AI systems? How could the development and proliferation of an AI system go awry? What are strategies to address these risks?

  • Tensions in Designing AI Systems: What are the inherent tensions surrounding the development of AI, and how will these tensions be resolved? What are the advantages and disadvantages of (de)centralized AI systems? How do competitive pressures undermine AI systems that are power-averse or altruistic? For agents tasked with pursuing a broad set of goals, how can one avoid incentivizing agents to develop power-seeking tendencies?

  • Criticism of Existing Literature: Are there substantial flaws in the existing concepts, arguments, or strategies regarding AI existential risk? Are there any risks that have been overlooked in the existing literature?

Fellows will receive funding to travel to San Francisco and spend seven months working on an original research project. Over the course of the fellowship, researchers will have the opportunity to attend seminars, guest lectures, reading groups, and social events, and to engage with a community of 8-15 fellows.

Qualifications:

  • Philosophy Ph.D. student, or

  • Graduate of a philosophy Ph.D. program (professors are encouraged to apply)

Qualities We’re Looking For:

  • Exceptional research abilities

  • Demonstrated philosophical rigor

  • Self-motivation

  • Willingness to learn more about technical subjects (however, no ML experience is required)

Benefits:

  • $80,000 total compensation, composed of a $60,000 grant and a $20,000 housing stipend

  • Covered student fees

  • Full-time research opportunities at CAIS for top-performing fellows

  • Connections to other institutions post-fellowship

  • Regular guest lectures (confirmed talks from Peter Railton, Shelly Kagan, and L.A. Paul, as well as from AI professors and researchers at Cambridge, Berkeley, and DeepMind)

  • Covered cost of travel

  • Free lunch and dinner daily

Application Process:

Applicants will need to complete a written application that includes a writing sample, a personal statement, and any research or publications they may have. A select number of applicants will then be interviewed remotely. Selected candidates will receive funding to travel to San Francisco from the 18th to the 20th of November for a visiting weekend.

Dates:

  • October 7: Applications close

  • November 18: Visiting weekend for invited applicants

  • January 9: Program begins*

  • August 4: Program ends*

*Dates are flexible based on candidates and their availability.

For more information, visit our website: philosophy.safe.ai

We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

How to apply
Application type Online
Web address to apply https://bit.ly/3AVc8Lh
Hard deadline October 7, 2022, 11:59pm PST
Contact
Web address for more information https://philosophy.safe.ai/
Contact name General Contact Email
Contact email
Bookkeeping
Time created August 23, 2022, 9:25pm UTC
Scheduled expiry date October 7, 2022, 11:59pm PST
Expired on October 9, 2022, 12:46am PST
Last updated December 9, 2022, 3:00pm UTC
Last update notification
To align the application deadline as closely as possible with the philosophy job market, we're extending our submission deadline to October 7.
September 10, 2022, 3:14pm UTC

Job Market Calendar This institution has confirmed that the position advertised will follow the APA's recommended job market calendar.