May 2023: Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and daily improvements.
Please share your thoughts, but don't share this link on social media, for now.
Readings from the AI Safety Fundamentals: Governance course.
Aim: to give a basic overview of what is going on in longtermist AI governance.
Audience: people who have limited familiarity with longtermist AI governance and want to understand it better. I don’t expect this to be helpful for those who already have familiarity with the field. ETA: Some people who were already quite familiar with the field have found this helpful.
This post outlines the different kinds of work happening in longtermist AI governance. For each kind of work, I’ll explain it, give examples, sketch some stories for how it could have a positive impact, and list the actors I’m aware of who are currently working on it.
Firstly, some definitions: AI governance means bringing about local and global norms, policies, laws, processes, politics, and institutions (not just governments) that will affect social outcomes from the development and deployment of AI systems.[2] Longtermist AI governance, in particular, is the subset of this work that is motivated by a concern for the very long-term impacts of AI. This overlaps significantly with work aiming to govern transformative AI (TAI).
It’s worth noting that the field of longtermist AI governance is very small. I’d guess that there are around 60 people working in AI governance who are motivated by a concern for very long-term impacts.
Source:
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
Expertise in China and its relations with the world might be critical in tackling some of the world’s most pressing problems. In particular, China’s relationship with the US is arguably the most important bilateral relationship in the world, with these two countries collectively accounting for over 40% of global GDP. These considerations led us to publish a guide to improving China–Western coordination on global catastrophic risks and other key problems in 2018. Since then, we have seen an increase in the number of people exploring this area.
China is one of the most important countries developing and shaping advanced artificial intelligence (AI). The Chinese government’s spending on AI research and development is estimated to be on the same order of magnitude as that of the US government,[2] and China’s AI research is prominent on the world stage and growing.
Because of the importance of AI from the perspective of improving the long-run trajectory of the world, we think relations between China and the US on AI could be among the most important aspects of their relationship. Insofar as the EU and/or UK influence advanced AI development through labs based in their countries or through their influence on global regulation, the state of understanding and coordination between European and Chinese actors on AI safety and governance could also be significant.
That, in short, is why we think working on AI safety and governance in China and/or building mutual understanding between Chinese and Western actors in these areas is likely to be one of the most promising China-related career paths. Below we provide more arguments and detailed information on this option.
Source:
https://80000hours.org/career-reviews/china-related-ai-safety-and-governance-paths/
Narrated for 80,000 Hours by TYPE III AUDIO.
People who want to improve the trajectory of AI sometimes think their options for object-level work are (i) technical safety work and (ii) non-technical governance work. But that list misses things; another group of arguably promising options is technical work in AI governance, i.e. technical work that mainly boosts AI governance interventions. This post provides a brief overview of some ways to do this work—what they are, why they might be valuable, and what you can do if you’re interested. I discuss:
- Engineering technical levers to make AI coordination/regulation enforceable (through hardware engineering, software/ML engineering, and heat/electromagnetism-related engineering)
- Information security
- Forecasting AI development
- Technical standards development
- Grantmaking or management to get others to do the above well
- Advising on the above.
Original text:
https://forum.effectivealtruism.org/posts/BJtekdKrAufyKhBGw/ai-governance-needs-technical-work
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
Introduction On October 7, 2022, the Biden administration announced a new export controls policy on artificial intelligence (AI) and semiconductor technologies to China. These new controls—a genuine landmark in U.S.-China relations—provide the complete picture after a partial disclosure in early September generated confusion. For weeks the Biden administration has been receiving criticism in many quarters for a new round of semiconductor export control restrictions, first disclosed on September 1. The restrictions block leading U.S. AI computer chip designers, such as Nvidia and AMD, from selling their high-end chips for AI and supercomputing to China. The criticism typically goes like this: China’s domestic AI chip design companies could not win customers in China because their chip designs could not compete with Nvidia and AMD on performance. Chinese firms could not catch up to Nvidia and AMD on performance because they did not have enough customers to benefit from economies of scale and network effects.
Source:
https://www.csis.org/analysis/choking-chinas-access-future-ai
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
This is a compilation of historical case studies that may help inform our intuitions about the future politics of AI (e.g. How will governments be involved in shaping the trajectory of AI? What forms of international coordination on AI are plausible, and under what conditions is each especially likely?).
Original text:
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
This is part one of the series ‘Transformative AI and Compute - A holistic approach’. You can find the sequence here and the summary here. This work was conducted as part of Stanford’s Existential Risks Initiative (SERI) at the Center for International Security and Cooperation, Stanford University, under the mentorship of Ashwin Acharya (Center for Security and Emerging Technology (CSET)) and Michael Andregg (Fathom Radiant). This post attempts to: 1. Introduce a simplified model of computing which serves as a foundational concept (Section 1). 2. Discuss the role of compute for AI systems (Section 2); in Section 2.3 you can find the updated compute plot you came for. 3.
Original text:
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
Advanced computer chips drive economic and scientific advancement as well as military capabilities. Complex supply chains produce these chips, and the global distribution of these chains and associated capabilities across nations has major implications for future technological competition and international security. However, supply chain complexity and opaqueness make it difficult to formulate policy. Avoiding unpredicted harms requires detailed understanding of the complete supply chain and national competitiveness across each element of that chain. To help policymakers understand global semiconductor supply chains, we have broken down these supply chains into their component elements and identified the features most relevant to policymakers because they either offer potential targets for technology controls or constrain the policy options available. A companion CSET issue brief titled “U.S. Semiconductor Exports to China: Current Policies and Trends” provides an overview of how export controls are currently applied to semiconductor supply chains. Companion CSET policy briefs titled “Securing Semiconductor Supply Chains” and “China’s Progress in Semiconductor Manufacturing Equipment” offer policy recommendations based on the analysis in this paper to sustain U.S. and allied advantages. The United States and its allies are global semiconductor supply chain leaders, while China lags.
Source:
https://cset.georgetown.edu/publication/the-semiconductor-supply-chain/
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
Abstract:
The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks drawing upon the literature on epistemic communities, and worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture, and high bargaining power due to the high demand for a limited supply of AI ‘talent’. Both are crucial to the future of AI activism and worthy of sustained attention.
Original text:
https://arxiv.org/ftp/arxiv/papers/2001/2001.06528.pdf
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
Written by Markus Anderljung (you might also want to reach out to Miles Brundage, Shahar Avin, or Saif Khan if you’re interested in these topics)
Compute is a very promising node for AI governance. Why? Powerful AI systems in the near term are likely to need massive amounts of compute, especially if the scaling hypothesis proves correct. Furthermore, compute seems more easily governable than other inputs to AI systems (talent, ideas, data), because it is more easily detectable (it requires energy, takes up physical space, etc.) and because its supply chain is very concentrated (which enables monitoring and governance) (see Khan, Mann, Peterson 2021, Avin unpublished, and Brundage forthcoming).
Original text:
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?
Source:
https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/
Crossposted from the Cold Takes Audio podcast.
We, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions.
Source:
https://www.cold-takes.com/making-the-best-of-the-most-important-century/
Crossposted from the Cold Takes Audio podcast.
The range of application of artificial intelligence (AI) is vast, as is the potential for harm. Growing awareness of potential risks from AI systems has spurred action to address those risks, while eroding confidence in AI systems and the organizations that develop them. A 2019 study (1) found over 80 organizations that published and adopted “AI ethics principles”, and more have joined since. But the principles often leave a gap between the “what” and the “how” of trustworthy AI development. Such gaps have enabled questionable or ethically dubious behavior, which casts doubts on the trustworthiness of specific organizations, and the field more broadly. There is thus an urgent need for concrete methods that both enable AI developers to prevent harm and allow them to demonstrate their trustworthiness through verifiable behavior. Below, we explore mechanisms (drawn from (2)) for creating an ecosystem where AI developers can earn trust - if they are trustworthy. Better assessment of developer trustworthiness could inform user choice, employee actions, investment decisions, legal recourse, and emerging governance regimes.
Original text:
https://arxiv.org/ftp/arxiv/papers/2112/2112.07773.pdf
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
In this article, I examine the challenge of ensuring coordination between AI developers to prevent catastrophic failure modes arising from the interactions of their systems. More specifically, I am interested in addressing bargaining failures as outlined in Jesse Clifton’s research agenda on Cooperation, Conflict & Transformative Artificial Intelligence (TAI) (2019) and Dafoe et al.’s Open Problems in Cooperative AI (2020).
First, I set out the general problem of bargaining failure and why bargaining problems might persist even for aligned superintelligent agents. Then, I argue for why developers might be in a good position to address the issue. I use a toy model to analyze whether we should expect them to do so by default. I deepen this analysis by comparing the merit and likelihood of different coordinated solutions. Finally, I suggest directions for interventions and future work.
The main goal of this article is to encourage and enable future work. To do so, I sketch the full path from problem to potential interventions. This large scope comes at the cost of depth of analysis. The models I use are primarily intended to illustrate how a particular question along this path can be tackled rather than to arrive at robust conclusions. At some point, I might revisit parts of this article to bolster the analysis in later sections.
Original text:
https://longtermrisk.org/coordination-challenges-for-preventing-ai-conflict/
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
Much has been written framing and articulating the AI governance problem from a catastrophic risks lens, but these writings have been scattered. This page aims to provide a synthesized introduction to some of these already prominent framings. This is just one attempt at suggesting an overall frame for thinking about some AI governance problems; it may miss important things.
Some researchers think that unsafe development or misuse of AI could cause massive harms. A key contributor to some of these risks is that catastrophe may not require all or most relevant decision makers to make harmful decisions. Instead, harmful decisions from just a minority of influential decision makers—perhaps just a single actor with good intentions—may be enough to cause catastrophe. For example, some researchers argue, if just one organization deploys highly capable, goal-pursuing, misaligned AI—or if many businesses (but a small portion of all businesses) deploy somewhat capable, goal-pursuing, misaligned AI—humanity could be permanently disempowered.
The above would not be very worrying if we could rest assured that no actors capable of these harmful actions would take them. However, especially in the context of AI safety, several factors are arguably likely to incentivize some actors to take harmful deployment actions:
- Misjudgment: Assessing the consequences of AI deployment may be difficult (as it is now, especially given the nature of AI risk arguments), so some organizations could easily get it wrong—concluding that an AI system is safe or beneficial when it is not.
- “Winner-take-all” competition: If the first organization(s) to deploy advanced AI is expected to get large gains, while leaving competitors with nothing, competitors would be highly incentivized to cut corners in order to be first—they would have less to lose.
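To make that race incentive concrete, here is a minimal toy payoff model in Python. It is my own illustration, not from the source page: the prize value, the "cut corners vs. stay cautious" framing, and the perceived-risk numbers are all assumptions chosen only to show how winner-take-all stakes plus underestimated risk can make corner-cutting look individually rational.

```python
# Toy model (illustrative assumptions only; not taken from the source page):
# two labs race to deploy an advanced AI system. Cutting corners raises a lab's
# chance of deploying first and claiming a winner-take-all prize, but also the
# (perceived) probability of a catastrophe that destroys value for both labs.

WIN_PRIZE = 100.0          # assumed value of deploying first
P_WIN_IF_FASTER = 0.9      # assumed chance of being first if only you cut corners
PERCEIVED_P_CAT = {0: 0.01, 1: 0.05, 2: 0.10}  # assumed risk, by number of corner-cutters
CATASTROPHE_COST = -100.0  # assumed loss to each lab if catastrophe occurs

def expected_payoff(i_cuts: bool, j_cuts: bool) -> float:
    """Expected payoff to lab i, given whether each lab cuts corners."""
    n_cut = int(i_cuts) + int(j_cuts)
    if i_cuts == j_cuts:
        p_win = 0.5  # symmetric race if both labs behave the same way
    else:
        p_win = P_WIN_IF_FASTER if i_cuts else 1 - P_WIN_IF_FASTER
    p_cat = PERCEIVED_P_CAT[n_cut]
    return (1 - p_cat) * p_win * WIN_PRIZE + p_cat * CATASTROPHE_COST

for i_cuts in (False, True):
    for j_cuts in (False, True):
        print(f"lab i cuts corners: {i_cuts!s:<5}  rival cuts: {j_cuts!s:<5}"
              f"  -> E[payoff to lab i] = {expected_payoff(i_cuts, j_cuts):6.1f}")
```

With these assumed numbers, cutting corners is each lab's best response whatever the rival does, yet both labs end up worse off than if both had stayed cautious; and if either lab judged the catastrophe risk to be much larger, caution would dominate instead, which is one way the misjudgment and competition factors above reinforce each other.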
Original text:
https://www.agisafetyfundamentals.com/governance-blog/global-vulnerability
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
AI governance concerns how humanity can best navigate the transition to a world with advanced AI systems. It relates to how decisions are made about AI, and what institutions and arrangements would help those decisions to be made well. I believe advances in AI are likely to be among the most impactful global developments in the coming decades, and that AI governance will become among the most important global issue areas. AI governance is a new field and is relatively neglected. I’ll explain here how I think about this as a cause area and my perspective on how best to pursue positive impact in this space. The value of investing in this field can be appreciated whether one is primarily concerned with contemporary policy challenges or long-term risks and opportunities (“longtermism”); this piece is primarily aimed at a longtermist perspective. Differing from some other longtermist work on AI, I emphasize the importance of also preparing for more conventional scenarios of AI development.
Contemporary Policy Challenges
AI systems are increasingly being deployed in important domains: for many kinds of surveillance; by authoritarian governments to shape online discourse; for autonomous weapons systems; for cyber tools and autonomous cyber capabilities; to aid and make consequential decisions such as for employment, loans, and criminal sentencing; in advertising; in education and testing; in self-driving cars and navigation; in social media. Society and policy makers are rapidly trying to catch up, to adapt, to create norms and policies to guide these new areas. We see this scramble in contemporary international tax law, competition/antitrust policy, innovation policy, and national security motivated controls on trade and investment. To understand and advise contemporary policymaking, one needs to develop expertise in specific policy areas (such as antitrust/competition policy or international security) as well as in the relevant technical aspects of AI.
Original text:
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
PASTA: Process for Automating Scientific and Technological Advancement.
Source:
https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/
Crossposted from the Cold Takes Audio podcast.
This chapter argues that artificial intelligence is beginning to emerge as a general purpose technology. Exploring historical examples of general purpose technologies, such as electricity and the digital computer, could help us to anticipate and think clearly about its future impact. One lesson from history is that general purpose technologies typically lead to broad economic, military, and political transformations. Another lesson is that these transformations typically unfold very gradually, and in a staggered fashion, due to various frictions and barriers to impact. I go on to argue that artificial intelligence could also constitute a revolutionary technology. If it ultimately supplants human labor in most domains, then it would likely catalyze a period of unusually profound change. The closest analogues to this period in world history would be the Neolithic Revolution and the Industrial Revolution.
Original Text:
https://docs.google.com/document/d/1I13_0o3kUe1AVQNfevOF9sHpc4mCQkuFDxOXFj_4g-I/
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
What the best available forecasting methods say about when we can expect transformative AI - and why there’s no “expert field” for this topic.
Source:
https://www.cold-takes.com/where-ai-forecasting-stands-today/
Crossposted from the Cold Takes Audio podcast.
I think that the development of human-level AI in my lifetime is quite plausible; I would give it more than a 1-in-4 chance. In this post I want to briefly discuss what I see as the most important impacts of AI. I think these impacts are the heavy hitters by a solid margin; each of them seems like a big deal, and I think there is a big gap to #4. Growth will accelerate, probably very significantly. Growth rates will likely rise by at least an order of magnitude, and probably further, until we run into severe resource constraints. Just as the last 200 years have experienced more change than 10,000 BCE to 0 CE, we are likely to see periods of 4 years in the future that experience more change than the last 200. Human wages will fall, probably very far. When humans work, they will probably be improving other humans’ lives (for example, in domains where we intrinsically value service by humans) rather than contributing to overall economic productivity. The great majority of humans will probably not work. Hopefully humans will remain relatively rich in absolute terms.
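A quick back-of-the-envelope sketch of what an order-of-magnitude jump in growth rates would mean for doubling times; the 3% and 30% figures below are illustrative assumptions of mine, not numbers from the post.

```python
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    """Years for output to double under constant exponential growth."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Assumed, illustrative rates: ~3% is roughly recent global growth;
# 30% is simply ten times that, matching the "order of magnitude" claim above.
for rate in (0.03, 0.30):
    print(f"{rate:.0%} annual growth -> doubling time ≈ {doubling_time_years(rate):.1f} years")
```

On those assumptions, output that currently takes roughly two decades to double would instead double in under three years, which is the sense in which a few future years could contain more change than the last two hundred.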
Original text:
https://paulfchristiano.medium.com/three-impacts-of-machine-intelligence-6285c8d85376
Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
The long view of economic history says we’re in the midst of a huge, unsustainable acceleration. What happens next?
Source:
https://www.cold-takes.com/this-cant-go-on/
Crossposted from the Cold Takes Audio podcast.