Scholar Support Specialist

Berkeley, CA



The SERI ML Alignment Theory Scholars (MATS) program aims to find and train talented individuals for what we see as the world’s most urgent and talent-constrained problem: reducing risks from unaligned artificial intelligence. We believe that ambitious young researchers from a variety of backgrounds have the potential to contribute to the field of alignment research. We aim to provide the mentorship, curriculum, financial support, and community necessary to aid this transition. Please see our theory of change and website for more details.

Scholar Support Specialist

We are looking for a resourceful individual with strong interpersonal skills to help bolster the MATS program’s effectiveness by providing 1-1 cognitive support (i.e., pair debugging, check-ins, accountability, etc.) to our researchers-in-training. The ideal Scholar Support Specialist is able to aid scholars’ long-term professional development as well as help maximize their research effectiveness during the MATS Summer 2023 program. Strong candidates are particularly excited about problem-solving and empowering alignment researchers. Bonus qualifications include broad technical knowledge of AI alignment and/or ML, as well as pedagogical knowledge or experience. This is a 3-month position with some possibility of extension.


Compensation will be $35-60/h, depending on experience and location. Must be willing to work from Berkeley, CA, though we may be able to offer flexibility for exceptional candidates. Office space provided. Start date: mid-June. Estimated 20-40 hours per week for 3 months (late June - early September) during Q3 2023, potentially continuing beyond this.


A Scholar Support Specialist’s primary role is to accelerate and enhance scholar development through the MATS program by offering personalized support to our researchers-in-training. Reports to the Scholar Support Lead. Specific responsibilities will change depending on cohort and program needs, but you can expect tasks like:

  • Meeting with scholars to provide 1-1 cognitive support, unblocking, goal planning, and advice (sometimes in the same session);
  • Performing regular check-ins with scholars via text, phone, video call, or in person;
  • Coaching and running workshops for scholars in meta-level techniques to accelerate research progress (e.g., backchaining, builder/breaker methodology, crux decomposition);
  • Directing scholars towards appropriate resources to help them flourish throughout the MATS program;
  • Assisting scholars with preparation for meetings with their mentor;
  • Relaying specific questions or concerns (with scholar permission) to the executive team when appropriate;
  • Identifying emergent trends in scholar needs or stress points over the course of the MATS program and flagging these to the Scholar Support team and/or management;
  • Participating in regular Scholar Support team meetings and 1-1s with the team lead to troubleshoot any emergent systemic issues holding scholars back, and strategize solutions as the program progresses.


  • US work authorization required;
  • Located in or willing to move to Berkeley, CA;

We encourage you to apply even if you are not sure that you meet all of the following criteria:

  • Ability to solve highly abstract problems;
  • Introspective skills;
  • Well-developed “theory of mind” (people-modeling skills) and cognitive empathy;
  • Autonomy and proactivity, especially if/when friction points arise;
  • Ability to “zoom in” or “zoom out” as context demands to achieve goals;
  • Eagerness to continue learning new problem-solving methods and tools;
  • An understanding of confidentiality, as well as the ability to maintain confidentiality where predefined boundaries exist;
  • Willingness to navigate emotionally heavy topics should they arise in scholar meetings (note: a separate Community Manager will act as the primary support for scholars’ emotional and mental health concerns);
  • Familiarity with AI safety-specific concerns (e.g., “doomerism,” “dual-use” research, “infohazards,” etc.).


Please apply via this form.

Time commitment: Full time

Applications due: 2023-06-16

  • Category AI
  • Created by Zane Kay
  • Submitted 26 May
  • Last updated 26 May