
Foundations Seek to Advance AI for Good, Protect Against Threats


By KAY DERVISHI of the Chronicle of Philanthropy

While technology experts sound the alarm on the pace of artificial-intelligence development, philanthropists – including long-established foundations and tech billionaires – have been responding with an uptick in grants.

Much of the philanthropy is focused on what is known as technology for good or “ethical AI,” which explores how to prevent or mitigate the harmful effects of artificial-intelligence systems. Some scientists believe AI can be used to predict climate disasters and discover new drugs to save lives. Others warn that large language models could soon upend white-collar professions, fuel misinformation, and threaten national security.

What philanthropy can do to influence the trajectory of AI is starting to emerge. Billionaires who earned their fortunes in technology are more likely to support projects and institutions that emphasize the positive outcomes of AI, while foundations not endowed with tech money have tended to focus more on AI’s dangers.

For example, former Google CEO Eric Schmidt and his wife, Wendy, have committed hundreds of millions of dollars to artificial-intelligence grantmaking programs housed at Schmidt Futures to “accelerate the next global scientific revolution.” In addition to committing $125 million to advance research into AI, the philanthropic venture last year announced a $148 million program to help postdoctoral fellows apply AI to science, technology, engineering, and mathematics.

Also in the AI enthusiast camp is the Patrick McGovern Foundation, named after the late billionaire who founded the International Data Group. It is one of the few philanthropies that have made artificial intelligence and data science an explicit grantmaking priority. In 2021, the foundation committed $40 million to help nonprofits use artificial intelligence and data to advance “their work to protect the planet, foster economic prosperity, ensure healthy communities,” according to a news release from the foundation. McGovern also has an internal team of AI experts who help nonprofits use the technology to improve their programs.

“I am an incredible optimist about how these tools are going to improve our capacity to deliver on human welfare,” says Vilas Dhar, president of the Patrick J. McGovern Foundation. “What I think philanthropy needs to do, and civil society writ large, is to make sure we realize that promise and opportunity — to make sure these technologies don’t merely become one more profit-making sector of our economy but rather are invested in furthering human equity.”

Salesforce is also interested in helping nonprofits use AI. The software company announced last month that it will award $2 million to education, workforce, and climate organizations “to advance the equitable and ethical use of trusted AI.”

Billionaire entrepreneur and LinkedIn co-founder Reid Hoffman is another big donor who believes AI can improve humanity and has funded research centers at Stanford University and the University of Toronto to achieve that goal. He is betting AI can positively transform areas like health care (“giving everyone a medical assistant”) and education (“giving everyone a tutor”), he told the New York Times in May.

The enthusiasm for AI solutions among tech billionaires is not uniform, however. EBay founder Pierre Omidyar has taken a mixed approach through his Omidyar Network, which is making grants to nonprofits using the technology for scientific innovation as well as those trying to protect data privacy and advocate for regulation.

“One of the things that we’re trying really hard to think about is how do you have good AI regulation that is both sensitive to the type of innovation that needs to happen in this space but also sensitive to the public accountability systems,” says Anamitra Deb, managing director at the Omidyar Network.

Grantmakers that take a more skeptical or negative view of AI are not a uniform group either, but they tend to be foundations unaffiliated with the tech industry.

The Ford, MacArthur, and Rockefeller foundations number among several grantmakers funding nonprofits examining the harmful effects of AI.

For example, computer scientists Timnit Gebru and Joy Buolamwini, who conducted pivotal research on racial and gender bias in facial-recognition tools – work that persuaded Amazon, IBM, and other companies to pull back on the technology in 2020 – have received sizable grants from those funders and other big, established foundations.

Gebru launched the Distributed Artificial Intelligence Research Institute in 2021 to research AI’s harmful effects on marginalized groups “free from Big Tech’s pervasive influence.” The institute raised $3.7 million in initial funding from the MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundations, and the Rockefeller Foundation. (The Ford, MacArthur, and Open Society foundations are financial supporters of the Chronicle.)

Buolamwini is continuing research on and advocacy against artificial-intelligence and facial-recognition technology through her Algorithmic Justice League, which also received at least $1.9 million in support from the Ford, MacArthur, and Rockefeller foundations as well as from the Alfred P. Sloan and Mozilla foundations.

“These are all people and organizations that I think have really had a profound impact on the AI field itself but also really caught the attention of policymakers as well,” says Eric Sears, who oversees MacArthur’s grants related to artificial intelligence. The Ford Foundation also launched a Disability x Tech Fund through Borealis Philanthropy, which is supporting efforts to fight bias against people with disabilities in algorithms and artificial intelligence.

There are also AI skeptics among the tech elite awarding grants. Tesla CEO Elon Musk has warned AI could result in “civilizational destruction.” In 2015, he gave $10 million to the Future of Life Institute, a nonprofit that aims to prevent “existential risk” from AI, and he spearheaded a recent letter calling for a pause on AI development. Open Philanthropy, a foundation started by Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, has provided the majority of support for the Center for AI Safety, which also recently warned about the “risk of extinction” associated with AI.

A significant portion of foundation giving on AI is also directed at universities studying ethical questions. The Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and Harvard’s Berkman Klein Center, received $26 million from 2017 to 2022 from Luminate (the Omidyar Group), Reid Hoffman, the Knight Foundation, and the William and Flora Hewlett Foundation. (Hewlett is a financial supporter of the Chronicle.)

The goal, according to a May 2022 report, was “to ensure that technologies of automation and machine learning are researched, developed, and deployed in a way which vindicates social values of fairness, human autonomy, and justice.” One university funding effort comes from the Kavli Foundation, which in 2021 committed $1.5 million a year for five years to two new centers focused on scientific ethics – with artificial intelligence as one priority area – at the University of California at Berkeley and the University of Cambridge. The Knight Foundation announced in May it will spend $30 million to create a new ethical technology institute at Georgetown University to inform policymakers.

Although hundreds of millions of philanthropic dollars have been committed to ethical AI efforts, influencing tech companies and governments remains a massive challenge.

“Philanthropy is just a drop in the bucket compared to the Goliath-sized tech platforms, the Goliath-sized AI companies, the Goliath-sized regulators and policymakers that can actually take a crack at this,” says Deb of the Omidyar Network.

Even with those obstacles, foundation leaders, researchers, and advocates largely agree that philanthropy can – and should – shape the future of AI.

“The industry is so dominant in shaping not only the scope of development of AI systems in the academic space, they’re shaping the field of research,” says Sarah Myers West, managing director of the AI Now Institute. “And as policymakers are looking to really hold these companies accountable, it’s key to have funders step in and provide support to the organizations on the front lines to ensure that the broader public interest is accounted for.”

_____

This article was provided to The Associated Press by the Chronicle of Philanthropy. Kay Dervishi is a staff writer at the Chronicle. Email: kay.dervishi@philanthropy.com. The AP and the Chronicle are solely responsible for this content. They receive support from the Lilly Endowment for coverage of philanthropy and nonprofits. For all of AP’s philanthropy coverage, visit https://apnews.com/hub/philanthropy.

Pictured at top: Eric Schmidt, co-founder of Schmidt Futures, listens on Capitol Hill in Washington on Feb. 23, 2021, during a hearing on emerging technologies and their impact on national security. (AP Photo | Susan Walsh, File)

Published by The Business Journal, Youngstown, Ohio.


