FIRST ON FOX: President Donald Trump’s push to establish "America’s global AI dominance" could run into friction from an unlikely source: the "effective altruism" movement, a small but influential group that has a darker outlook on artificial intelligence.

Trump signed an executive order earlier this year titled, "Removing Barriers to American Leadership in Artificial Intelligence." This week he met with top technology industry leaders, including Mark Zuckerberg, Bill Gates and others, at the White House, where AI loomed large in the discussions. However, not all of the industry’s leaders share the president’s vision for American AI dominance.

Jason Matheny, a former senior Biden official who currently serves as the CEO of the RAND Corporation, is a leader in the effective altruism movement, which, among other priorities, seeks to regulate the development of artificial intelligence with the goal of reducing its risks. 

Effective altruism is a philanthropic social movement whose proponents say they aim to maximize the good they can do in the world by giving to what they calculate are the most effective charities and interventions. The movement includes powerful donors from across many sectors, including technology, who pour money into fighting what the group sees as existential threats, including artificial intelligence.


Some in the movement have also pledged to give away a portion of their income, while others have argued that it is moral to earn as much money as possible in order to give it away.

A former Defense Department official familiar with the industry’s leaders told Fox News Digital that since a 2017 speech at an effective altruism forum in which he laid out his vision, Matheny has "been very deliberate about inserting personnel who share his AI-doomerism worldview" into government and government contractor roles. 

"Since then, he has made good on every single one of his calls to action to explicitly infiltrate think tanks, in-government decision makers, and trusted government contractors with this effective altruism, AI kind of doomsayer philosophy," the official continued. 

A RAND Corporation spokesperson pushed back against this label and said that Matheny "believes a wide range of views and backgrounds are essential to analyzing and informing sound public policy. His interest is in encouraging talented young people to embrace public service."

The spokesperson added that AI being an "existential threat" is "not the lens" through which the company approaches AI, but said, "Our researchers are taking a broad look at the many ways AI is and will impact society – including both opportunities and threats."

In his 2017 speech, Matheny discussed his vision of influencing the government from both the inside and the outside to advance effective altruist goals.

"The work that I've done at IARPA (Intelligence Advanced Research Projects Activity) has convinced me that there's a lot of low-hanging fruit within government positions that we should be picking as effective altruists. There are many different roles that effective altruists can have within government organizations," Matheny told an effective altruism forum in 2017, before going on to explain how even "fairly junior positions" can "wield incredible influence."


Matheny went on to explain the need for "influence" on the "outside" in the form of contractors specializing in fields like biology and chemistry who work for government agencies, along with experts at various think tanks.

"That's another way you can have an influence on the government," Matheny said. 

Matheny advanced the philosophy’s ideals in the Biden White House in his roles as deputy assistant to the president for technology and national security, deputy director for national security in the Office of Science and Technology Policy and coordinator for technology and national security at the National Security Council. 

According to reporting by Politico, RAND officials were involved in writing former President Joe Biden’s 2023 executive order "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The order mirrored many effective altruist goals regarding AI, such as the idea that "harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks." 

However, a RAND spokesperson told Fox News Digital that Matheny had "no role" in crafting the Biden EO, but said its "researchers did provide technical expertise and analysis to inform the EO in response to requests from policymakers."

The order stated that "responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security."

Its solution was to increase regulations on the development of AI and add new government reporting requirements for companies developing the technology. For many in the industry, this was seen as an example of government overreach that stifled innovation and hurt the U.S.’s ability to compete with countries like China. 

The order has since been revoked by Trump’s AI order, which was signed in the first few days of his second administration. However, as head of RAND, a public policy research and advisory group, Matheny has continued to push his vision for AI regulation and to warn about the technology’s potential pitfalls.

RAND has posted on social media in recent months warning that AI will "fundamentally reshape the economics of cybersecurity" and that the "growing use of AI chatbots for mental health support means society is 'deploying pseudo-therapists at an unprecedented scale.'"

Semafor reported earlier this year that the Trump administration was butting heads with Anthropic, a top artificial intelligence company with ties to the EA movement and the Biden administration, on AI policy. 

"It's hard to tell a clean story of every single actor involved, but at the heart, the Doomerism community that Jason's really at the heart of, what they are really concerned with is they truly believe about a runaway super-intelligent model that takes over the world like a Terminator scenario," the former DOD official told Fox News Digital, adding that the fear of Effective Altruists that AI is an "existential threat" has led to their push that is "restrictive" to the "growth of the technology."

"With respect to the Trump administration’s AI policies, much RAND analysis is focused on key parts of the President’s AI Action Plan, including analysis we’ve done on AI evaluations, secure data centers, energy options for AI, cybersecurity and biosecurity," the RAND spokesperson said.

"Mr. Matheny appreciates that the Trump Administration may have different views than the prior administration on AI policy," the spokesperson continued. "He remains committed, along with RAND, to contributing expertise and analysis to helping the Trump Administration shape policies to advance the United States’ interests."