<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body><div style="font-family: sans-serif;"><div class="markdown" style="white-space: normal;">
<p dir="auto">The stakes for getting the governance of AI right have never seemed higher—this post-doc at the University of Oxford is an opportunity to contribute much-needed research. Read more about the our new Oxford Martin AI Governance Initiative here: <a href="https://is.gd/IDpQwm" style="color: #3983C4;">https://is.gd/IDpQwm</a>.</p>
<h2>Postdoctoral Researcher, Oxford Martin AI Governance Initiative</h2>
<h3>Oxford Martin School, University of Oxford</h3>
<ul>
<li><em>Grade 7: £36,024 - £44,263 per annum.</em></li>
<li><em>Fixed-term, for three years</em></li>
<li><em>The closing date for applications is <strong>12.00 noon on Monday 23 October 2023.</strong></em></li>
</ul>
<p dir="auto">More details and applications here: <a href="https://is.gd/RaBkWU" style="color: #3983C4;">https://is.gd/RaBkWU</a></p>
<p dir="auto">The Oxford Martin School is seeking one or two Postdoctoral Researchers who hold, or are close to completion, of, a Ph.D./D.Phil in subject such as political science, computer science, engineering, law or policy to join the recently formed Oxford Martin AI Governance Initiative (AIG Oxford).</p>
<p dir="auto">AIG Oxford seeks to improve the impact of AI on global societal outcomes through impactful research that is rigorously grounded in the social and computational sciences, decision-maker education campaigns, and training the next generations of technology governance leader and will focus on the governance of AI from both technical and policy perspectives.<br>
The Initiative’s research addresses topics such as:</p>
<ul>
<li>Frontier AI Regulation: What form should it take — domestically and internationally?</li>
<li>Technical Governance: What machine learning, computing hardware, and cryptographic approaches can facilitate governance including treaty compliance and regulatory oversight?</li>
<li>International Governance: What international norms and institutions - standards setting, monitoring, collaborative research, arms control - can mitigate risks across jurisdictions?</li>
<li>AI Auditing Regimes: How should model access decisions be made, and what institutions should make them?</li>
<li>AI Industry Cooperation: How can AI firms cooperate for the public benefit?</li>
</ul>
<p dir="auto">Kindest<br>
Mike</p>
<hr style="border: 0; height: 1px; background: #333; background-image: linear-gradient(to right, #ccc, #333, #ccc);">
<p dir="auto">Michael A Osborne:<br>
Professor of Machine Learning,<br>
University of Oxford;<br>
<a href="http://www.robots.ox.ac.uk/~mosb" style="color: #3983C4;">http://www.robots.ox.ac.uk/~mosb</a>.</p>
</div>
</div>
</body>
</html>