Open date: December 21st, 2021
Last review date: Tuesday, Jan 18, 2022 at 11:59pm (Pacific Time)
Applications received after this date will be reviewed by the search committee if the position has not yet been filled.
Final date: Monday, Jan 31, 2022 at 11:59pm (Pacific Time)
Applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled.
Advances in artificial intelligence (AI) technologies have led to remarkable scientific achievements and humanitarian applications, including improved medical diagnoses and more effective disaster relief efforts. At the same time, there is a growing awareness of the significant safety, ethical, and societal challenges stemming from misuse and unintended consequences of AI systems. Risks related to artificial intelligence take many forms including privacy, security, safety, fairness, equity, explainability, polarization, disinformation, and the distribution of power. How we understand and manage these risks throughout the AI lifecycle will determine whether the full upside potential of these technologies can be realized and whether we can prevent significant and disproportionate harms.
The Berkeley Institute for Data Science (BIDS) is seeking a creative and driven postdoctoral researcher for an Independent Postdoctoral Fellowship in Responsible AI. This researcher should bring a strong background in ethical and responsible practice in the field of Artificial Intelligence (AI), and in performing data-intensive research. The position requires the Fellow to create a research project that answers questions related to the ethical and responsible use and development of AI systems and machine learning technologies. Research topics might include (but are not limited to) automation and job displacement, algorithmic bias, explainable AI, and other concerns of inequality and distributive justice globally. We are especially interested in developing methods for evaluating AI systems with respect to emerging standards in government regulations about privacy, bias, fairness, and explainability. Ideally the work will have the potential to affect policy.
The Data Science and Responsible AI Postdoctoral Fellowship is a full-time two-year position at BIDS. It is made possible by funding from Accenture Applied Intelligence.
The Fellowship holder will be expected to produce novel research results communicated in academic publications and to lead a workshop or seminar series on the topic of the research; other possible contributions include open-source scientific software, curated datasets, and white papers and blog articles on data science and responsible AI.
In defining and carrying out the independent, self-directed research project, the Fellow will work under the guidance and mentorship of an advisory team made up of BIDS Faculty Affiliates and research staff including Professor Rediet Abebe and Professor Stuart Russell (Electrical Engineering and Computer Sciences), Professor Tolani Britton (Graduate School of Education) and Dr. Jessica Newman (AI Security Initiative).
The position will also benefit from collaboration with Accenture Applied Intelligence. Accenture supports BIDS’ research and educational objectives in data science with current foci on environment and energy, ethical AI, and social justice. The Fellow will be able to leverage this relationship to fully harness the data landscape and expertise available.
In addition, as part of the BIDS research community, the Fellow will have the opportunity to learn from individuals engaged in research software development, computational reproducibility, and data-intensive research from a wide range of application domains.
• Define and perform independent research to increase understanding of the ethical, political, and societal concerns embedded in the development and deployment of artificial intelligence and machine learning technologies.
• Identify research-based solutions to improve theory and practice in ethical and responsible AI.
• Help to establish transparent and open best practices for performing data-intensive research, both in responsible/ethical AI and more broadly.
• Develop a workshop or seminar series for BIDS faculty, researchers, and collaborators, as well as the larger UC Berkeley responsible AI community.
• Give talks and presentations to a variety of audiences including the University of California community, Accenture Applied Intelligence leaders and researchers, and globally at relevant conferences.
• Produce novel research results communicated in academic publications; other possible contributions include open-source scientific software, curated datasets, and white papers and blog articles on data science and responsible AI.
• Update advisors, as well as collaborators at Accenture Applied Intelligence and other Accenture stakeholders, regularly on research progress through meetings.
• Basic qualifications (by time of application): PhD (or equivalent international degree), or enrollment in a PhD (or equivalent international degree) program.
• Additional qualifications (by start date): PhD (or equivalent international degree).
• No more than four years of post-degree research experience by start date.
• PhD in a technical or social-science discipline such as computer science, political economy, information science, or science, technology & society studies, or a related field.
• Experience conducting data-intensive technical, social science, and/or policy research, particularly related to ethical and responsible AI, social aspects of technology, mechanism design, or related fields.
• Ability to synthesize insights from multiple disciplines to tackle interdisciplinary projects.
• Experience in the research ethics of machine learning and data science.
• Firm understanding of how to work with large datasets and build reproducible pipelines to answer research questions.
• Experience designing data collection, statistical analysis, and data visualization strategies.
• Ability to identify, communicate, and solve problems when integrating heterogeneous data.
• Extensive experience communicating data analysis to audiences with a wide range of skill levels.
• Program management experience.
• Fluency in R and/or Python.
• Demonstrated ability to collaborate with diverse stakeholders from government, civil society, and industry.
In accordance with BIDS ideals, we are especially interested in postdoctoral researchers with a history of open, transparent scientific practice. We find that Fellows who can implement relevant data science infrastructure, pipelines, software, and tools while carrying out their research are particularly successful, becoming well integrated into our efforts to achieve open and reproducible work.
Curriculum Vitae - Your most recently updated C.V.
Statement of Accomplishments - 1-2 pages. Include contributions to scientific research papers, scientific software, science communication, datasets created, workshops, and public code.
Research Proposal - 1-2 pages. This proposal is not necessarily a commitment to what you will work on in your project; rather, it serves as a way for the application committee to assess your ability to plan a research project. We understand that your research project may change and evolve as you learn more about the data available to you. You may use hypothetical datasets, but they should be realistically achievable with current technologies. Please also describe how you imagine leveraging the expertise of the advisors.
Statement of Contributions to Diversity - 1 page or less. Diversity contributions documented in the application file will be used to evaluate applicants.
References - 3 required (contact information only)
Help contact: firstname.lastname@example.org
Diversity, equity, inclusion, and belonging are core values at UC Berkeley. Our excellence can only be fully realized by faculty, students, and academic and non-academic staff who share our commitment to these values. Successful candidates for our academic positions will demonstrate evidence of a commitment to advancing equity, inclusion, and belonging.
The University of California, Berkeley is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, or protected veteran status. For the complete University of California nondiscrimination and affirmative action policy see: http://policy.ucop.edu/doc/4000376/NondiscrimAffirmAct
In searches when letters of reference are required all letters will be treated as confidential per University of California policy and California state law. Please refer potential referees, including when letters are provided via a third party (i.e., dossier service or career center), to the UC Berkeley statement of confidentiality (http://apo.berkeley.edu/ucb-confidentiality-policy) prior to submitting their letter.
As a condition of employment, you will be required to comply with the University of California SARS-CoV-2 (COVID-19) Vaccination Program Policy https://policy.ucop.edu/doc/5000695/SARS-CoV-2_Covid-19. All Covered Individuals under the policy must provide proof of Full Vaccination or, if applicable, submit a request for Exception (based on Medical Exemption, Disability, and/or Religious Objection) or Deferral (based on pregnancy) no later than the applicable deadline. For new University of California employees, the applicable deadline is eight weeks after their first date of employment. (Capitalized terms in this paragraph are defined in the policy.)