Winter 2023
Forum in Review
AI for the Public Sector
Digital Strategy:
Artificial Intelligence for the Public Sector
Russell Wald, Managing Director for Policy and Society
Stanford Institute for Human-Centered Artificial Intelligence (HAI)
The world of artificial intelligence (AI) is evolving so rapidly that even experts cannot predict where it is going, Russell Wald informed the Forum. AI is defined as human intelligence that is enhanced by hardware and software to accelerate learning and improve decision-making. It is based on reverse engineering of the human brain’s neural networks and is driving the deep learning revolution. AI has a lot of promise, but also a lot of hype, Mr. Wald noted.
While an AI decision may be more accurate than a human’s choice, AI does not have the flexibility of human intelligence; for example, the ability to fly an airplane while considering what to have for dinner. AI mainly excels at formal logic, which allows it to sift through gargantuan data sets at remarkable speed. AI can sort information from thousands of websites or find the right chess move from hundreds of previous games. However, AI struggles to connect abstract logic with real-world meanings. AI enhances but does not replace human intelligence.
Since its origins in 1956, when the term "artificial intelligence" was coined, AI has been poised to transform the economy across sectors ranging from healthcare and finance to retail and education. The year 2022 saw significant breakthroughs for AI based on three key trends:
1. Greater availability of data
2. Increases in computing power
3. Improvements to algorithm design
First, increasingly large amounts of data have fueled computers' ability to learn. Second, greater computational capacity (often termed "compute") has enabled researchers to build models spanning billions of parameters. Third, fundamental innovations in algorithm design, such as reinforcement learning techniques, are driving AI forward. And AI moved out of the laboratory and into daily life when the conversational chatbot ChatGPT garnered 1 million users within five days of its public release, far faster than Twitter or any other social platform.
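Of the three trends, reinforcement learning is the least self-explanatory. A minimal sketch (our illustration, not part of Mr. Wald's remarks) shows the core idea: an agent repeatedly tries actions, observes rewards, and shifts its value estimates toward the actions that pay off.

```python
import random

def run_bandit(true_payoffs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy learning on a simple multi-armed bandit problem."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_payoffs)   # learned value of each action
    counts = [0] * len(true_payoffs)        # how often each action was tried
    for _ in range(steps):
        if rng.random() < epsilon:          # occasionally explore at random
            action = rng.randrange(len(true_payoffs))
        else:                               # otherwise exploit the best estimate
            action = max(range(len(true_payoffs)), key=lambda a: estimates[a])
        # Observe a noisy reward and nudge the estimate toward it.
        reward = true_payoffs[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# After enough trials, the agent's estimates converge toward the true payoffs,
# and it learns that the third action is the most rewarding.
est = run_bandit([0.2, 0.5, 0.9])
```

Nothing here is specific to any real system; the payoff values and parameters are illustrative. The same try-observe-adjust loop, scaled up enormously, underlies the reinforcement learning techniques Mr. Wald described.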
Three factors have hobbled AI adoption in the public sector: Lack of talent, lack of funding, and lack of compute capabilities.
AI can make government more effective. For example, the Department of Veterans Affairs formerly took three years to adjudicate a claim; with AI's help, this has been reduced to three months. AI can also expedite tax-collection enforcement, and improved AI algorithms have reduced public-sector hiring bias.
While AI has great promise for the public sector, fully 88% of the federal agencies that adopted AI failed to meet their reporting requirements. The causes: lack of talent, as AI expertise is a limited resource; lack of funding; and lack of compute capabilities.
Mr. Wald noted that state governments have tremendous data repositories. And, while outside expertise from private-sector entities such as Deloitte, McKinsey, and Scale AI can help state governments exploit these data, the most effective AI platforms and tools are those managed by people within a given agency who are familiar with the sources and quality of the data and with how it is most efficiently and appropriately applied. The leadership role of Chief Machine Learning Officer is critical for this enterprise, said Mr. Wald.
The most effective AI platforms and tools are those managed by people within a given agency who are familiar with the sources and quality of the data.
AI is augmenting, not replacing, human intelligence. The impact of AI applications may depend on how well their human colleagues understand and agree with what AI does. People don't embrace what they don't understand. Therefore, training for existing workers is imperative. Developing processes that leverage AI in transparent and explainable ways is essential to creating comfort between humans and their AI colleagues.
Mr. Wald noted that humans have a 75% efficacy rate and machines have a 97% efficacy rate. But he shared a colleague's comment to illustrate a key point and address fears about human-versus-AI job competition: AI is not going to replace a radiologist. But the radiologist who uses AI is going to replace the radiologist who doesn't.
Data transparency and explainable algorithms are required to build trust in AI outcomes. Transparent data-collection methods enable the end user to understand why certain pieces of information are being collected and how they’re going to be used.
Machine learning does not yet replicate human intelligence, but, in the interim, AI can augment it. For example, while driverless cars may be more than a decade away, AI already delivers improved sensors for accident prevention and will augment driver safety, but not replace the driver.
AI is not going to replace a radiologist. But the radiologist who uses AI is going to replace the radiologist who doesn’t.
Tools such as OpenAI's DALL-E 2 image generator and ChatGPT text generator (built on the GPT-3.5 model) are getting more sophisticated and are reaching the point where people have a hard time telling the difference between artificially rendered works and those created by humans. With such publicly accessible AI generators, there are as yet no rules of the road to guide use. When images can be readily manipulated, the truth becomes harder to distinguish from a fake. Helping constituents to recognize and question the authenticity of AI-generated images could become a challenge for state legislators.
Artificial intelligence requires vast amounts of computing power, data, and expertise to train and deploy the massive machine learning models behind the most advanced research. The Stanford HAI program is advocating for the development of a National Research Cloud (NRC) to provide academic and non-profit researchers with the compute power and government data sets needed for education and research. Such a collaborative approach would encourage significant AI advancements and also ensure the U.S. maintains its leadership and competitiveness on the global stage.
If the NRC does not happen, any state that creates an opportunity for an AI initiative could see an influx of talent and funding.
In conclusion, Mr. Wald assured the Forum that AI is here to stay, is being deployed across most industries, and is rapidly changing, with significant applications and implications for state legislators.
Discussion
Tom Finneran (Moderator)
How can states overcome the funding requirements and regulatory issues related to enabling AI? How can we protect against open-source exploitation of AI for bad ends?
Mr. Wald
The requirements for significant computing power, large data sets, and talented people can pose significant barriers. The National Research Cloud (NRC) could offer a collaborative solution to these challenges. However, if the NRC does not happen, any state that creates an opportunity for an AI initiative could see an influx of talent and funding.
AI source code is still protected and held secret. And AI publications go through peer review before appearing in established journals and have copyright protection. However, AI evolves so quickly that most advances are announced at conferences, where there is no peer review and no intellectual property protection.
Sen. Jonathan Dismang (AR)
AI could be defined as manipulating data to get an outcome. But state governments have data in silos, in databases that are not formatted alike. What are the key potential applications for AI in state government? One application we saw was the use of AI to assess behavioral health – relating prescriptions given to children to their behavioral outcomes. We found that we were overmedicating the children, but not producing any behavioral improvements.
Mr. Wald
AI is revelatory – it can see patterns that cannot be detected by humans. Today compute processes are better and cheaper and most data are digital, so data can be more readily manipulated by machines than by humans. Some of the potential uses are for enforcement; for example, to detect fraud or insider trading. Other applications can enhance benefits; for example, to allow better or more accurate distribution of monies due to beneficiaries.
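The fraud-detection idea can be made concrete with a toy sketch (our illustration, not Mr. Wald's; real enforcement systems use far richer models): flag transactions that deviate sharply from an account's typical behavior.

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` standard deviations
    from the mean. A threshold of 2 is used here because a single large
    outlier also inflates the standard deviation it is measured against."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

# Seven routine payments around $100 and one anomalous $4,750 payment.
payments = [102, 98, 105, 99, 101, 97, 4750, 103]
flagged = flag_outliers(payments)  # flags index 6, the $4,750 payment
```

A human auditor scanning thousands of accounts would miss such patterns at scale; a machine checks every record the same way, which is the sense in which compute now makes data "more readily manipulated by machines than by humans."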
Sen. Ron Kouchi (HI)
Computer chips are an essential element of the AI story, and this leads to some challenging situations; for example, the guidance chips in Russian drones that are being used in Ukraine are U.S.-made.
Mr. Wald
Chips are essential for AI, and a "War of the Chips" is coming. Currently, the U.S., Taiwan, South Korea, and the Netherlands dominate the chip supply chain. The 2022 CHIPS and Science Act provides roughly $280 billion in new funding to boost domestic semiconductor research and manufacturing in the U.S. But China and Russia are both in the race.
For more information about our guest speaker's organization, visit the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Presenter Biography
Managing Director for Policy and Society
Stanford Institute for Human-Centered Artificial Intelligence
Russell Wald leads the policy and society initiative at the Stanford Institute for Human-Centered AI (HAI), and advances the organization’s engagement with civil society and governments worldwide. As a part of HAI’s executive management team, Wald sets the strategic vision for policy and societal research, education, and outreach at HAI. He directs a dynamic team to equip civil society and policymakers with the knowledge and resources to take informed and meaningful actions on advancing AI with human-centered values.
He is the co-author of various publications on AI, including Building a National AI Research Resource (2021), Enhancing International Cooperation in AI Research: The Case for a Multilateral AI Research Institute (2022), and The Centrality of Data and Compute for AI Innovation: A Blueprint for a National Research Cloud (2022, Notre Dame Journal of Emerging Technologies). Currently he is part of an HAI seed grant research project titled Addicted by Design: An Investigation of How AI-fueled Digital Media Platforms Contribute to Addictive Consumption.
Wald has held various policy program and government relations positions at Stanford University for nearly a decade. He also served as special assistant to Amy Zegart and Ashton Carter at Stanford's Center for International Security and Cooperation (CISAC). In 2014, he co-designed and led the inaugural Stanford congressional boot camp, and has since created numerous tech policy boot camps, establishing a strong and effective tradition of educating policymakers at Stanford and enhancing the collaboration between governments and academic institutions.
Prior to his work at Stanford, he held numerous roles with the Los Angeles World Affairs Council. He is a Visiting Fellow with the National Security Institute at George Mason University, and a former Term Member with the Council on Foreign Relations and the Truman National Security Project. Wald is a graduate of UCLA.
Senate Presidents’ Forum
579 Broadway
Hastings-on-Hudson, NY 10706
914-693-1818 • info@senpf.com
Copyright © 2023 Senate Presidents' Forum. All rights reserved.