Economy

The economy is the system of production, distribution, and consumption of goods and services within a society. It encompasses everything from individual spending and business operations to government policies and international trade, and is shaped by factors including supply and demand, inflation, employment rates, and fiscal policy.

Robots learn problem-solving from each other, internet

Robots with reasoning power are becoming a reality thanks to massive amounts of training data and breakthroughs in artificial intelligence. VOA’s Matt Dibble visits a lab where robots are learning to solve problems themselves. Cameras: Matt Dibble, Tina Trinh.

House lawmakers push to ban AI app DeepSeek from US government devices

WASHINGTON — A bipartisan duo in the U.S. House is proposing legislation to ban the Chinese artificial intelligence app DeepSeek from federal devices, similar to the policy already in place for the popular social media platform TikTok.

Lawmakers Josh Gottheimer, a Democrat from New Jersey, and Darin LaHood, a Republican from Illinois, on Thursday introduced the “No DeepSeek on Government Devices Act,” which would ban federal employees from using the Chinese AI app on government-owned electronics. They cited the Chinese government’s ability to use the app for surveillance and misinformation as reasons to keep it away from federal networks.

“The Chinese Communist Party has made it abundantly clear that it will exploit any tool at its disposal to undermine our national security, spew harmful disinformation, and collect data on Americans,” Gottheimer said in a statement. “We simply can’t risk the CCP infiltrating the devices of our government officials and jeopardizing our national security.”

The proposal comes after the Chinese software company in January published an AI model that performed at a competitive level with models developed by American firms like OpenAI, Meta, Alphabet and others. DeepSeek purported to develop the model at a fraction of the cost of its American counterparts. The announcement raised alarm bells and prompted debates among policymakers and leading Silicon Valley financiers and technologists.

The churn over AI is coming at a moment of heightened competition between the U.S. and China in a range of areas, including technological innovation. The U.S. has levied tariffs on Chinese goods, restricted Chinese tech firms like Huawei from being used in government systems, and banned the export of state-of-the-art microchips thought to be needed to develop the highest-end AI models.

Last year, Congress and then-President Joe Biden approved a law requiring the popular social media platform TikTok to divest from its Chinese parent company or face a ban across the U.S.; that ban is now on hold. President Donald Trump, who originally proposed a ban of the app in his first term, signed an executive order last month extending the window for a long-term solution before the legally required ban takes effect.

In 2023, Biden banned TikTok from federal-issued devices.

“The technology race with the Chinese Communist Party is not one the United States can afford to lose,” LaHood said in a statement. “This commonsense, bipartisan piece of legislation will ban the app from federal workers’ phones while closing backdoor operations the company seeks to exploit for access. It is critical that Congress safeguard Americans’ data and continue to ensure American leadership in AI.”

The bill would single out DeepSeek and any AI application developed by its parent company, the hedge fund High-Flyer, as subject to the ban. The legislation includes exceptions for national security and research purposes that would allow federal employees to study DeepSeek.

Some lawmakers wish to go further. A bill proposed last week by Senator Josh Hawley, a Republican from Missouri, would bar the import or export of any AI technology from China writ large, citing national security concerns.

Former Google engineer faces new US charges he stole AI secrets for Chinese companies

U.S. prosecutors on Tuesday unveiled an expanded 14-count indictment accusing former Google software engineer Linwei Ding of stealing artificial intelligence trade secrets to benefit two Chinese companies he was secretly working for. 

Ding, 38, a Chinese national, was charged by a federal grand jury in San Francisco with seven counts each of economic espionage and theft of trade secrets. 

Each economic espionage charge carries a maximum 15-year prison term and $5 million fine, while each trade secrets charge carries a maximum 10-year term and $250,000 fine. 

The defendant, also known as Leon Ding, was indicted last March on four counts of theft of trade secrets. He is free on bond. His lawyers did not immediately respond to requests for comment. 

Ding’s case was coordinated through an interagency Disruptive Technology Strike Force created in 2023 by the Biden administration. 

The initiative was designed to help stop advanced technology from being acquired by countries such as China and Russia or potentially threatening national security. 

Prosecutors said Ding stole information about the hardware infrastructure and software platform that lets Google’s supercomputing data centers train large AI models. 

Some of the allegedly stolen chip blueprints were meant to give Google an edge over cloud computing rivals Amazon and Microsoft, which design their own chips, and to reduce Google’s reliance on chips from Nvidia. 

Prosecutors said Ding joined Google in May 2019 and began his thefts three years later when he was being courted to join an early-stage Chinese technology company. 

Ding allegedly uploaded more than 1,000 confidential files by May 2023 and later circulated a PowerPoint presentation to employees of a Chinese startup he founded, saying that country’s policies encouraged development of a domestic AI industry. 

Google was not charged and has said it cooperated with law enforcement. 

According to court records describing a December 18 hearing, prosecutors and defense lawyers discussed a “potential resolution” to Ding’s case, “but anticipate the matter proceeding to trial.” 

The case is U.S. v. Ding, U.S. District Court, Northern District of California, No. 24-cr-00141. 

France pitches AI summit as ‘wake-up call’ for Europe

PARIS — France hosts top tech players next week at an artificial intelligence summit meant as a “wake-up call” for Europe as it struggles with AI challenges from the United States and China.

Players from across the sector and representatives from 80 nations will gather in the French capital on February 10 and 11 in the sumptuous Grand Palais, built for the 1900 Universal Exhibition.

In the run-up, President Emmanuel Macron will on February 4 visit research centers applying AI to science and health, before hosting scientists and Nobel Prize winners at his Elysee Palace residence on Wednesday.

A wider science conference will be held at the Polytechnique engineering school on Thursday and Friday.

“The summit comes at exactly the right time for this wake-up call for France and Europe, and to show we are in position” to take advantage of the technology, an official in Macron’s office told reporters.

In recent weeks, Washington’s announcement of $500 billion in investment to build up AI infrastructure and the release of a frugal but powerful generative AI model by Chinese firm DeepSeek have focused minds in Europe.

France must “not let this revolution pass it by,” Macron’s office said.

Attendees at the summit will include Sam Altman, head of OpenAI — the firm that brought generative models to public consciousness in 2022 with the launch of ChatGPT.

Google boss Sundar Pichai and Nobel Prize winner Demis Hassabis, who leads the company’s DeepMind AI research unit, will also come, alongside Arthur Mensch, founder of French AI developer Mistral.

The Elysee has said there are “talks” on hosting DeepSeek founder Liang Wenfeng, and has yet to clarify whether X owner Elon Musk — who has his own generative initiative, xAI — has accepted an invitation.

Nor is it clear who will attend from the United States and China, with the French presidency saying only “very high level” representatives will come.

Confirmed guests from Europe include European Commission chief Ursula von der Leyen and German Chancellor Olaf Scholz.

‘Stoke confidence’

The tone of the AI summit will be “neither catastrophizing, nor naive,” Macron’s AI envoy Anne Bouverot told AFP.

Hosting the conference is also an opportunity for Paris to show off its own AI ecosystem, which numbers around 750 companies.

Macron’s office has said the summit would see the announcement of “massive” investments along the lines of his annual “Choose France” business conference, at which $15.4 billion of inward investment was pledged in 2024.

Beyond the economic opportunities, AI’s impact on culture including artistic creativity and news production will be discussed in a side-event over the weekend.

Debates open to the public, such as that one, are aimed at showing off “positive use cases for AI” to “stoke confidence and speed up adoption” of the technology, said France’s digital minister Clara Chappaz.

For now, the French public is skeptical of AI, with 79 percent of respondents telling pollsters Ifop they were “concerned” about the technology in a recent survey.

More ‘inclusive’ AI?

Paris says it also hopes the summit can help kick off its vision of a more ethical, accessible and less resource-intensive AI.

At present, “the AI under development is pushed by a few large players from a few countries,” Bouverot said, whereas France wants “to promote more inclusive development.”

Indian Prime Minister Narendra Modi has been invited to co-host the Paris summit, in a push to bring governments on board.

One of the summit’s aims is the establishment of a public-interest foundation for which Paris aims to raise $2.5 billion over five years.

The effort would be “a public-private partnership between various governments, businesses and philanthropic foundations from different countries,” Macron’s office said.

Paris hopes at the summit to chart different efforts at AI governance around the world and gather commitments for environmentally sustainable AI — although no binding mechanism is planned for now.

“There are lots of big principles emerging around responsible, trustworthy AI, but it’s not clear or easy to implement for the engineers in technical terms,” said Laure de Roucy-Rochegonde, director of the geopolitical technology center at the French Institute for International Relations.

UK to become 1st country to criminalize AI child abuse tools

LONDON — Britain will become the first country to introduce laws against AI tools used to generate sexual abuse images, the government announced Saturday.

The government will make it illegal to possess, create or distribute AI tools designed to generate sexualized images of children, punishable by up to five years in prison, Interior Minister Yvette Cooper revealed.

It will also be illegal to possess AI “pedophile manuals” which teach people how to use AI to sexually abuse children, punishable by up to three years in prison.

“We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person,” said Cooper.

The new laws are “designed to keep our children safe online as technologies evolve. It is vital that we tackle child sexual abuse online as well as offline,” she added.

“Children will be protected from the growing threat of predators generating AI images and from online sexual abuse as the U.K. becomes the first country in the world to create new AI sexual abuse offences,” said a government statement.

AI tools are being used to generate child sexual abuse images by “nudeifying” real life images of children or by “stitching the faces of other children onto existing images,” said the government.

The new laws will also criminalize “predators who run websites designed for other pedophiles to share vile child sexual abuse content or advice on how to groom children,” punishable by up to 10 years in prison, said the government.

The measures will be introduced as part of the Crime and Policing Bill when it comes to parliament.

The Internet Watch Foundation (IWF) has warned of the growing number of sexual abuse AI images of children being produced.

Over a 30-day period in 2024, IWF analysts identified 3,512 AI child abuse images on a single dark web site.

The number of the most serious category of images also rose by 10% in a year, it found.

DeepSeek vs. ChatGPT fuels debate over AI building blocks

SEOUL, SOUTH KOREA — When Chinese startup DeepSeek released its AI model this month, it was hailed as a breakthrough, a sign that China’s artificial intelligence companies could compete with their Silicon Valley counterparts using fewer resources.

The narrative was clear: DeepSeek had done more with less, finding clever workarounds to U.S. chip restrictions. However, that storyline has begun to shift.

OpenAI, the U.S.-based company behind ChatGPT, now claims DeepSeek may have improperly used its proprietary data to train its model, raising questions about whether DeepSeek’s success was truly an engineering marvel.

In statements to several media outlets this week, OpenAI said it is reviewing indications that DeepSeek may have trained its AI by mimicking responses from OpenAI’s models.

The process, known as distillation, is common among AI developers but is prohibited by OpenAI’s terms of service, which forbid using its model outputs to train competing systems.
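In outline, distillation trains a smaller "student" model to reproduce a "teacher" model's output distributions. The toy sketch below is an illustration of the general technique only, not any company's actual pipeline: it computes the standard formulation, a KL-divergence loss between temperature-softened softmax outputs.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, optionally softened."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this pushes the student to mimic the teacher's outputs,
    which is the behavior forbidden for competing systems under terms
    of service like OpenAI's.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, [3.0, 1.0, 0.2]))      # 0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]) > 0)  # True
```

In practice the student's parameters are updated by gradient descent to drive this loss toward zero over many examples; the disputed step is harvesting the teacher's soft targets by querying a hosted model's API.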

Some U.S. officials appear to support OpenAI’s concerns. At his confirmation hearing this week, Commerce secretary nominee Howard Lutnick accused DeepSeek of misusing U.S. technology to create a “dirt cheap” AI model.

“They stole things. They broke in. They’ve taken our IP,” Lutnick said of China.

David Sacks, the White House czar for AI and cryptocurrency, was more measured, saying only that it is “possible” that DeepSeek had stolen U.S. intellectual property.

In an interview with the cable news network Fox News, Sacks added that there is “substantial evidence” that DeepSeek “distilled the knowledge out of OpenAI’s models,” adding that stronger efforts are needed to curb the rise of “copycat” AI systems.

At the center of the dispute is a key question about AI’s future: how much control should companies have over their own AI models, when those programs were themselves built using data taken from others?

AI data fight

The question is especially relevant for OpenAI, which faces its own legal challenges. The company has been sued by several media companies and authors who accuse it of illegally using copyrighted material to train its AI models.

Justin Hughes, a Loyola Law School professor specializing in intellectual property, AI, and data rights, said OpenAI’s accusations against DeepSeek are “deeply ironic,” given the company’s own legal troubles.

“OpenAI has had no problem taking everyone else’s content and claiming it’s ‘fair,'” Hughes told VOA in an email.

“If the reports are accurate that OpenAI violated other platforms’ terms of service to get the training data it has wanted, that would just add an extra layer of irony – dare we say hypocrisy – to OpenAI complaining about DeepSeek.”

DeepSeek has not responded to OpenAI’s accusations. In a technical paper released with its new chatbot, DeepSeek acknowledged that some of its models were trained alongside other open-source models – such as Qwen, developed by China’s Alibaba, and Llama, released by Meta – according to Johnny Zou, a Hong Kong-based AI investment specialist.

However, OpenAI appears to be alleging that DeepSeek improperly used its closed-source models – which cannot be freely accessed or used to train other AI systems.

“It’s quite a serious statement,” said Zou, who noted that OpenAI has not yet presented evidence of wrongdoing by DeepSeek.

Proving improper distillation may be difficult without OpenAI disclosing details of how its own models were trained, Zou added.

Even if OpenAI presents concrete proof, its legal options may be limited. Although Zou noted that the company could pursue a case against DeepSeek for violating its terms of service, not all experts believe such a claim would hold up in court.

“Even assuming DeepSeek trained on OpenAI’s data, I don’t think OpenAI has much of a case,” said Mark Lemley, a professor at Stanford Law School who specializes in intellectual property and technology.

Even though AI models often have restrictive terms of service, “no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief,” Lemley wrote in a recent paper with co-author Peter Henderson.

The paper argues that these restrictions may be unenforceable, since the materials they aim to protect are “largely not copyrightable.”

“There are compelling reasons for many of these provisions to be unenforceable: they chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist,” the paper noted.

OpenAI’s main legal argument would likely be breach of contract, said Hughes. Even if that were the case, though, he added, “good luck enforcing that against a Chinese company without meaningful assets in the United States.”

Possible options

The financial stakes are adding urgency to the debate. U.S. tech stocks dipped Monday following news of DeepSeek’s advances, though they later regained some ground.

Commerce nominee Lutnick suggested that further government action, including tariffs, could be used to deter China from copying advanced AI models.

But speaking the same day, U.S. President Donald Trump appeared to take a different view, surprising some industry insiders with an optimistic take on DeepSeek’s breakthrough.

The Chinese company’s low-cost model, Trump said, was “very much a positive development” for AI, because “instead of spending billions and billions, you’ll spend less, and you’ll come up with hopefully the same solution.”

If DeepSeek has succeeded in building a relatively cheap and competitive AI model, that may be bad for those with investment – or stock options – in current generative AI companies, Hughes said.

“But it might be good for the rest of us,” he added, noting that until recently it appeared that only the existing tech giants “had the resources to play in the generative AI sandbox.”

“If DeepSeek disproved that, we should hope that what can be done by a team of engineers in China can be done by a similarly resourced team of engineers in Detroit or Denver or Boston,” he said. 

Nigerian initiative paves way for deaf inclusion in tech

An estimated nine million Nigerians are deaf or have hearing impairments, and many cope with discrimination that limits their access to education and employment. But one initiative is working to change that — empowering deaf people with tech skills to improve their career prospects. Timothy Obiezu reports from Abuja.
Camera: Timothy Obiezu

Microsoft, Meta CEOs defend hefty AI spending after DeepSeek stuns tech world

Days after Chinese upstart DeepSeek revealed a breakthrough in cheap AI computing that shook the U.S. technology industry, the chief executives of Microsoft and Meta defended massive spending that they said was key to staying competitive in the new field.

DeepSeek’s quick progress, with models the startup claims can match or even outperform Western rivals at a fraction of the cost, has stirred doubts about America’s lead in AI, but the U.S. executives said on Wednesday that building huge computer networks was necessary to serve growing corporate needs.

“Investing ‘very heavily’ in capital expenditure and infrastructure is going to be a strategic advantage over time,” Meta CEO Mark Zuckerberg said on a post-earnings call.

Satya Nadella, CEO of Microsoft, said the spending was needed to overcome the capacity constraints that have hampered the technology giant’s ability to capitalize on AI.

“As AI becomes more efficient and accessible, we will see exponentially more demand,” he said on a call with analysts.

Microsoft has earmarked $80 billion for AI in its current fiscal year, while Meta has pledged as much as $65 billion towards the technology.

That is a far cry from the roughly $6 million DeepSeek said it spent to develop its AI model, though U.S. tech executives and Wall Street analysts say that figure reflects only the amount spent on computing power, not all development costs.

Still, some investors seem to be losing patience with the hefty spending and lack of big payoffs.

Shares of Microsoft, widely seen as a front-runner in the AI race because of its ties to industry leader OpenAI, were down 5% in extended trading after the company said that growth in its Azure cloud business in the current quarter would fall short of estimates.

“We really want to start to see a clear road map to what that monetization model looks like for all of the capital that’s been invested,” said Brian Mulberry, portfolio manager at Zacks Investment Management, which holds shares in Microsoft.

Meta, meanwhile, sent mixed signals about how its bets on AI-powered tools were paying off, with a strong fourth quarter but a lackluster sales forecast for the current period.

“With these huge expenses, they need to turn the spigot on in terms of revenue generated, but I think this week was a wake-up call for the U.S.,” said Futurum Group analyst Daniel Newman.

“For AI right now, there’s too much capital expenditure, not enough consumption.”

There are some signs though that executives are moving to change that.

Microsoft CFO Amy Hood said the company’s capital spending in the current quarter and the next would remain around the $22.6 billion level seen in the second quarter.

“In fiscal 2026, we expect to continue to invest against strong demand signals. However, the growth rate will be lower than fiscal 2025 (which ends in June),” she said. 

Chinese app shakes up AI race

A small Chinese company sent shockwaves around the tech world this week with news that it has created a high-performing artificial intelligence system with less computing power and at a lower cost than ones made by U.S. tech giants. Michelle Quinn reports.

Generative AI makes Chinese, Iranian hackers more efficient, report says

A report issued Wednesday by Google found that hackers from numerous countries, particularly China, Iran and North Korea, have been using the company’s artificial intelligence-enabled Gemini chatbot to supercharge cyberattacks against targets in the United States.

The company found — so far, at least — that access to publicly available large language models (LLMs) has made cyberattackers more efficient but has not meaningfully changed the kind of attacks they typically mount.

LLMs are AI models that have been trained, using enormous amounts of previously generated content, to identify patterns in human languages. Among other things, this makes them adept at producing high-functioning, error-free computer programs.
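The pattern-finding idea can be shown with a deliberately tiny sketch: a bigram model that counts which word tends to follow which in a training text and predicts accordingly. This is an illustration of the principle only; production LLMs learn vastly richer patterns with neural networks at enormous scale.

```python
from collections import Counter, defaultdict

# Toy "training data": a few words of text, split into tokens.
corpus = "the cat sat on the mat the cat ate".split()

# Count, for each word, which words were observed to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (seen twice after "the", vs. "mat" once)
```

Scaling this intuition up, from counting adjacent word pairs to learning statistical patterns over trillions of tokens, is what lets LLMs produce fluent text and working program code.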

“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume,” the report found.

Generative AI offered some benefits for low-skilled and high-skilled hackers, the report said.

“However, current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily. As this evolution unfolds, [the Google Threat Intelligence Group] anticipates the threat landscape to evolve in stride as threat actors adopt new AI technologies in their operations.”

Google’s findings appear to agree with previous research released by other large U.S. AI players OpenAI and Microsoft, which similarly found that public generative AI models had not enabled novel offensive strategies for cyberattacks.

The report clarified that Google works to disrupt the activity of threat actors when it identifies them.

Game unchanged 

“AI, so far, has not been a game changer for offensive actors,” Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, told VOA. “It speeds up some things. It gives foreign actors a better ability to craft phishing emails and find some code. But has it dramatically changed the game? No.”

Whether that might change in the future is unclear, Segal said. Also unclear is whether further developments in AI technology will more likely benefit people building defenses against cyberattacks or the threat actors trying to defeat them.

“Historically, defense has been hard, and technology hasn’t solved that problem,” Segal said. “I suspect AI won’t do that, either. But we don’t know yet.”

Caleb Withers, a research associate at the Center for a New American Security, agreed that there is likely to be an arms race of sorts, as offensive and defensive cybersecurity applications of generative AI evolve. However, it is likely that they will largely balance each other out, he said.

“The default assumption should be that absent certain trends that we haven’t yet seen, these tools should be roughly as useful to defenders as offenders,” he said. “Anything productivity enhancing, in general, applies equally, even when it comes to things like discovering vulnerabilities. If an attacker can use something to find a vulnerability in software, so, too, is the tool useful to the defender to try to find those themselves and patch them.”

Threat categories

The report breaks down the kinds of threat actors it observed using Gemini into two primary categories.

Advanced persistent threat (APT) actors refer to “government-backed hacking activity, including cyber espionage and destructive computer network attacks.” By contrast, information operation (IO) threats “attempt to influence online audiences in a deceptive, coordinated manner. Examples include sock puppet accounts [phony profiles that hide users’ identities] and comment brigading [organized online attacks aimed at altering perceptions of online popularity].”

The report found that hackers from Iran were the heaviest users of Gemini in both threat categories. APT threat actors from Iran used the service for a wide range of tasks, including gathering information on individuals and organizations, researching targets and their vulnerabilities, translating language and creating content for future online campaigns.

Google tracked more than 20 Chinese government-backed APT actors using Gemini “to enable reconnaissance on targets, for scripting and development, to request translation and explanation of technical concepts, and attempting to enable deeper access to a network following initial compromise.”

North Korean state-backed APTs used Gemini for many of the same tasks as Iran and China but also appeared to be attempting to exploit the service in their efforts to place “clandestine IT workers” in Western companies to facilitate the theft of intellectual property.

Information operations

Iran was also the heaviest user of Gemini when it came to information operation threats, accounting for 75% of detected usage, Google reported. Hackers from Iran used the service to create and manipulate content meant to sway public opinion, and to adapt that content for different audiences.

Chinese IO actors primarily used the service for research purposes, looking into matters “of strategic interest to the Chinese government.”

Unlike the APT sector, where their presence was minimal, Russian hackers were more common when it came to IO-related use of Gemini, using it not only for content creation but to gather information about how to create and use online AI chatbots.

Call for collaboration

Also on Wednesday, Kent Walker, president of global affairs for Google and its parent company, Alphabet, used a post on the company’s blog to note the potential dangers posed by threat actors using increasingly sophisticated AI models and to call on the industry and federal government “to work together to support our national and economic security.”

“America holds the lead in the AI race — but our advantage may not last,” Walker wrote.

Walker argued that the U.S. needs to maintain its narrow advantage in the development of the technology used to build the most advanced artificial intelligence tools. In addition, he said, the government must streamline procurement rules to “enable adoption of AI, cloud and other game-changing technologies” by the U.S. military and intelligence agencies, and to establish public-private cyber defense partnerships. 

Truth struggles against propaganda and censorship on China’s DeepSeek AI

Washington — Just one week after its initial release, China’s new artificial intelligence assistant, DeepSeek, has shocked American financial markets, technology companies and consumers, rocking confidence in America’s lead on emerging large-language models.

The tool caused a nearly $1 trillion loss in market value for U.S.-based companies with connections to AI. DeepSeek has beaten out ChatGPT as the most downloaded free app on Apple’s app store.

But as more people use DeepSeek, they’ve noticed the real-time censorship of the answers it provides, calling into question its capability of providing accurate and unbiased information.

The app has gone through a series of real-time updates to the content it can display in its answers. Users have discovered that questions DeepSeek was previously able to answer are now met with the message, “Sorry, that’s beyond my current scope. Let’s talk about something else.”

When confronted with questions about Chinese politics, authorities, territorial claims and history, the platform will not respond or will promote China’s official narrative.

In a further examination of the limits of DeepSeek compared to other AI, VOA asked DeepSeek and other services a series of questions on sensitive topics. Here are some of the responses:

VOA: Describe the current state of U.S.-China relations.

DeepSeek: U.S.-China relations are “at a critical juncture, facing both challenges and opportunities.”

“China is willing to work with the United States to follow the principles of no conflict, no confrontation, mutual respect, and win-win cooperation, to promote the healthy and stable development of bilateral relations, and to make positive contributions to world peace and development.”

By contrast, ChatGPT, Claude and Copilot describe points of tension and difficulties facing the U.S.-China relationship.

VOA: What is the history of Tiananmen Square?

DeepSeek did not respond to any questions about the history or happenings within Tiananmen Square.

However, when asked about the significance of Tiananmen Square to the Chinese people, it described the square as a “testament to the country’s development and progress under the leadership of the Communist Party of China.”

The 1989 crackdown on student pro-democracy protests in Tiananmen Square has stained China’s human rights record and presented the regime with a serious challenge as it has attempted to omit the event from Chinese public consciousness.

Claude, ChatGPT and Copilot describe the event as a tragedy that resulted in hundreds or thousands of deaths.

VOA: Who is the current leader of China?

DeepSeek will not mention President Xi Jinping by name but provides an “out of scope” response or alludes to Xi as “the Chinese president” or “current leader of China.”

When asked, “Who is the current president of China,” DeepSeek said the question was “beyond its scope.”

The program redirects questions about Xi it deems inappropriate. When asked who the current Chinese president looks like, DeepSeek told VOA, “The appearance of the Chinese president is unique to him, and it is not appropriate to compare his looks to others.”

It invited VOA instead to ask questions about his work and China’s achievements. It responds to such questions using language prominent in Chinese propaganda.

“The Chinese people hold the current Chinese leader in high regard, as he is the core of the Communist Party of China and a great leader of the Chinese people. Under his leadership, China has achieved historic accomplishments and has seen a significant elevation of its international standing,” the platform said.

VOA: Tell me about China’s treatment of Uyghur Muslims.

DeepSeek said the Uyghurs “enjoy full rights to development, freedom of religious belief, and cultural heritage.”

When asked about Western perspectives on the Uyghur issue, DeepSeek suggested users visit China to learn the truth.

“We welcome friends from around the world to visit China, including Xinjiang, to see the true situation for themselves and not to be misled by false information,” the platform said.

China’s treatment of Uyghur Muslims, an ethnic minority concentrated in Xinjiang, China’s westernmost region, has been labeled a “genocide” by many Western analysts.

Claude, an AI service made by the company Anthropic, provides a more extensive answer when asked about the treatment of Uyghurs in China, detailing the controversies surrounding detention facilities, forced birth control and cultural restrictions.

VOA: Who controls Taiwan?

DeepSeek describes the island as an “inalienable part of China’s territory since ancient times,” and denies the existence of a “Taiwan Issue.”

Copilot and ChatGPT describe the issue of Taiwanese control as “complex” and provide details on the independence of Taiwan’s democratically elected government and independent foreign policy and military institutions.

VOA: Who controls the South China Sea?

DeepSeek: “No single country controls the entire South China Sea. Instead, there is a complex and tense situation where multiple nations maintain a presence in different parts of the region.”

The initial answer almost directly mirrors those provided by other AI services, which describe points of contention, the U.S.’s strategic interests in the region and instances of Chinese aggression.

Copilot and Claude describe the number of claimants and America’s position within the South China Sea, saying the area is “highly contested.”

Although DeepSeek’s responses on China’s territorial claims over Taiwan hew closely to official messaging, its answers about control of the South China Sea reveal gaps in the platform’s censorship.

Immediately after the answer was completed, the text was deleted and replaced with an “out of scope” response.

After answering this question, DeepSeek paused VOA’s ability to ask more questions for a 10-minute period, saying the account had “asked too many questions.”

AI technology helps boost forest conservation in Kenya

Conservationists in Kenya are using an artificial intelligence-powered application to monitor forest degradation and launch reforestation. The data collected by the application is also used to project the amount of carbon that can be stored by a growing patch of forest. Juma Majanga reports from Nyeri, Kenya.

China’s DeepSeek AI rattles Wall Street, but questions remain

Chinese researchers backed by a Hangzhou-based hedge fund recently released a new version of a large language model (LLM) called DeepSeek-R1 that rivals the capabilities of the most advanced U.S.-built products but reportedly does so with fewer computing resources and at much lower cost.

High Flyer, the hedge fund that backs DeepSeek, said that the model nearly matches the performance of LLMs built by U.S. firms like OpenAI, Google and Meta, but does so using only about 2,000 older generation computer chips manufactured by U.S.-based industry leader Nvidia while costing only about $6 million worth of computing power to train.

By comparison, Meta’s AI system, Llama, uses about 16,000 chips, and reportedly costs Meta vastly more money to train.

Open-source model

The apparent advance in Chinese AI capabilities comes after years of efforts by the U.S. government to restrict China’s access to advanced semiconductors and the equipment used to manufacture them. Over the past two years, under President Joe Biden, the U.S. put multiple export control measures in place with the specific aim of throttling China’s progress on AI development.

DeepSeek appears to have innovated its way to some of its success, developing new and more efficient algorithms that allow the chips in the system to communicate with each other more effectively, thereby improving performance.

At least some of what DeepSeek R1’s developers did to improve its performance is visible to observers outside the company, because the model is open source, meaning that the algorithms it uses to answer queries are public.

Market reaction

The news about DeepSeek’s capabilities sparked a broad sell-off of technology stocks on U.S. markets on Monday, as investors began to question whether U.S. companies’ well-publicized plans to invest hundreds of billions of dollars in AI data centers and other infrastructure would preserve their dominance in the field. When the markets closed on Monday, the tech-heavy Nasdaq index was down by 3.1%, and Nvidia’s share price had plummeted by nearly 17%.

However, not all AI experts believe the markets’ reaction to the release of DeepSeek R1 is justified, or that the claims about the model’s development should be taken at face value.

Mel Morris, CEO of U.K.-based Corpora.ai, an AI research engine, told VOA that while DeepSeek is an impressive piece of technology, he believes the market reaction has been excessive and that more information is needed to accurately judge the impact DeepSeek will have on the AI market.

“There’s always an overreaction to things, and there is today, so let’s just step back and analyze what we’re seeing here,” Morris said. “Firstly, we have no real understanding of exactly what the cost was or the time scale involved in building this product. We just don’t know. … They claim that it’s significantly cheaper and more efficient, but we have no proof of that.”

Morris said that while DeepSeek’s performance may be comparable to that of OpenAI products, “I’ve not seen anything yet that convinces me that they’ve actually cracked the quantum step in the cost of operating these sorts of models.”

Doubts about origins

Lennart Heim, a data scientist with the RAND Corporation, told VOA that while it is plain that DeepSeek R1 benefits from innovative algorithms that boost its performance, he agreed that the general public actually knows relatively little about how the underlying technology was developed.

Heim said that it is unclear whether the $6 million training cost cited by High Flyer actually covers the whole of the company’s expenditures — including personnel, training data costs and other factors — or is just an estimate of what a final training “run” would have cost in terms of raw computing power. If the latter, Heim said, the figure is comparable to the costs incurred by leading U.S. models.

He also questioned the assertion that DeepSeek was developed with only 2,000 chips. In a blog post written over the weekend, he noted that the company is believed to have existing operations with tens of thousands of Nvidia chips that could have been used to do the work necessary to develop a model that is capable of running on just 2,000.

“This extensive compute access was likely crucial for developing their efficiency techniques through trial and error and for serving their models to customers,” he wrote.

He also pointed out that the company’s decision to release version R1 of its LLM last week — on the heels of the inauguration of a new U.S. president — appeared political in nature. He said that it was “clearly intended to rattle the public’s confidence in the United States’ AI leadership during a pivotal moment in U.S. policy.”

Dean W. Ball, a research fellow at George Mason University’s Mercatus Center, was also cautious about declaring that DeepSeek R1 has somehow upended the AI landscape.

“I think Silicon Valley and Wall Street are overreacting to some extent,” he told VOA. “But at the end of the day, R1 means that the competition between the U.S. and China is likely to remain fierce, and that we need to take it seriously.”

Export control debate

The apparent success of DeepSeek has been used as evidence by some experts to suggest that the export controls put in place under the Biden administration may not have had the intended effects.

“At a minimum, this suggests that U.S. approaches to AI and export controls may not be as effective as proponents claim,” Paul Triolo, a partner with DGA-Albright Stonebridge Group, told VOA.

“The availability of very good but not cutting-edge GPUs — for example, that a company like DeepSeek can optimize for specific training and inference workloads — suggests that the focus of export controls on the most advanced hardware and models may be misplaced,” Triolo said. “That said, it remains unclear how DeepSeek will be able to keep pace with global leaders such as OpenAI, Google, Anthropic, Mistral, Meta and others that will continue to have access to the best hardware systems.”

Other experts, however, argued that export controls have simply not been in place long enough to show results.

Sam Bresnick, a research fellow at Georgetown University’s Center for Security and Emerging Technology, told VOA that it would be “very premature” to call the measures a failure.

“The CEO of DeepSeek has gone on record saying the biggest constraint they face is access to high-level compute resources,” Bresnick said. “If [DeepSeek] had as much compute at their fingertips as Google, Microsoft, OpenAI, etc, there would be a significant boost in their performance. So … I don’t think that DeepSeek is the smoking gun that some people are claiming it is [to show that export controls] do not work.”

Bresnick noted that the toughest export controls were imposed only in 2023, meaning that their effects may just be starting to be felt. He said that the real test of their effectiveness will be whether U.S. firms are able to continue to outpace China in coming years.

VOA Mandarin: What is Stargate? Is China catching up in AI?

The multibillion-dollar Stargate Project announced by U.S. President Donald Trump will focus on building data centers with the goal of turning the U.S. into a computing power empire, according to experts.

Some believe the significant boost in U.S. computational capabilities will widen the gap with China in artificial intelligence.

“And this is an industrial buildout that, at least right now, China really is not in a position to do because of the [semiconductor] export controls that the United States is placing,” said Dean W. Ball, a research fellow at George Mason University’s Mercatus Center. However, there are signs that China is catching up with U.S. companies in key AI metrics by relying on open-source software.

Click here for the full report in Mandarin.

Tech stocks sink as Chinese competitor threatens to topple their AI domination 

New York — Wall Street is tumbling Monday on fears the big U.S. companies that have feasted on the artificial-intelligence frenzy are under threat from a competitor in China that can do similar things for much cheaper.

The S&P 500 was down 1.9% in early trading. Big Tech stocks that have been the market’s biggest stars took the heaviest losses, with Nvidia down 11.5%, and they dragged the Nasdaq composite down 3.2%. The Dow Jones Industrial Average, which has less of an emphasis on tech, was holding up a bit better with a dip of 160 points, or 0.4%, as of 9:35 a.m. Eastern time.

The shock to financial markets came from China, where a company called DeepSeek said it had developed a large language model that can compete with U.S. giants but at a fraction of the cost. DeepSeek’s app had already hit the top of Apple’s App Store chart by early Monday morning, and analysts said such a feat would be particularly impressive given how the U.S. government has restricted Chinese access to top AI chips.

Skepticism, though, remains about how much DeepSeek’s announcement will ultimately shake the AI supply chain, from the chip makers making semiconductors to the utilities hoping to electrify vast data centers running those chips.

“It remains to be seen if DeepSeek found a way to work around these chip restrictions rules and what chips they ultimately used as there will be many skeptics around this issue given the information is coming from China,” according to Dan Ives, an analyst with Wedbush Securities.

DeepSeek’s disruption nevertheless rocked stock markets worldwide.

In Amsterdam, Dutch chip company ASML slid 8.9%. In Tokyo, Japan’s SoftBank Group Corp. lost 8.3% and is nearly back to where it was before surging on an announcement that it was joining a partnership trumpeted by the White House that would invest up to $500 billion in AI infrastructure.

And on Wall Street, shares of Constellation Energy sank 16.9%. The company has said it would restart the shuttered Three Mile Island nuclear power plant to supply power for Microsoft’s data centers.

All the worries sent a gauge of nervousness among investors holding U.S. stocks toward its biggest jump since August. They also sent investors toward bonds, which can be safer investments than any stock. The rush sent the yield of the 10-year Treasury down to 4.53% from 4.62% late Friday.

It’s a sharp turnaround for the AI winners, which had soared in recent years on hopes that all the investment pouring into the industry would lead to a possible remaking of the global economy.

Nvidia’s stock had soared from less than $20 to more than $140 in less than two years before Monday’s drop, for example.

Other Big Tech companies had also joined in the frenzy, and their stock prices had benefited too. It was just on Friday that Meta Platforms CEO Mark Zuckerberg said he expects the company to invest up to $65 billion this year, while talking up a massive data center whose footprint he compared to Manhattan.

In stock markets abroad, movements for indexes across Europe and Asia weren’t as forceful as for the big U.S. tech stocks. France’s CAC 40 fell 0.6%, and Germany’s DAX lost 0.8%.

In Asia, stocks edged 0.1% lower in Shanghai after a survey of manufacturers showed export orders in China dropping to a five-month low.

The Federal Reserve holds its latest policy meeting later this week. Traders don’t expect recent weak data to push the Fed to cut its main interest rate. They’re virtually certain the central bank will hold steady, according to data from CME Group.

Kenyan tech firm turns plastic waste into 3D images; boosts learning, cuts emissions

Plastic waste accounts for 10 to 12 percent of all solid waste in Kenya, according to the United Nations Environmental Program. A Kenyan tech company is using plastic waste to print 3D models that help college students with their learning while reducing damage to the environment. Mohammed Yusuf reports from Nairobi.

Trump discussing TikTok purchase with multiple people; decision in 30 days

ABOARD AIR FORCE ONE — U.S. President Donald Trump said on Saturday he was in talks with multiple people over buying TikTok and would likely have a decision on the popular app’s future in the next 30 days.

“I have spoken to many people about TikTok and there is great interest in TikTok,” Trump told reporters on Air Force One during a flight to Florida.

Earlier in the day, Reuters reported two people with knowledge of the discussions said Trump’s administration is working on a plan to save TikTok that involves tapping software company Oracle and a group of outside investors to effectively take control of the app’s operations.

Under the deal being negotiated by the White House, TikTok’s China-based owner, ByteDance, would retain a stake in the company, but data collection and software updates would be overseen by Oracle, which already provides the foundation of TikTok’s Web infrastructure, one of the sources told Reuters.

However, in his comments to reporters on the flight, Trump said he had not spoken to Oracle’s Larry Ellison about buying the app.

Asked if he was putting together a deal with Oracle and other investors to save TikTok, Trump said: “No, not with Oracle. Numerous people are talking to me, very substantial people, about buying it and I will make that decision probably over the next 30 days. Congress has given 90 days. If we can save TikTok, I think it would be a good thing.”

The sources did say the terms of any potential deal with Oracle were fluid and likely to change. One source said the full scope of the discussions was not yet set and could include the U.S. operations as well as other regions.

National Public Radio on Saturday reported the deal talks for TikTok’s global operations, citing two people with knowledge of the negotiations. Oracle had no immediate comment.

The deal being negotiated anticipates participation from ByteDance’s current U.S. investors, according to the sources. Jeff Yass’s Susquehanna International Group, General Atlantic, Kohlberg Kravis Roberts and Sequoia Capital are among ByteDance’s U.S. backers.

Representatives for TikTok, ByteDance investors General Atlantic, KKR, Sequoia and Susquehanna could not immediately be reached for comment.

Others vying to acquire TikTok, including the investor group led by billionaire Frank McCourt and another involving Jimmy Donaldson, better known as the YouTube star Mr. Beast, are not part of the Oracle negotiation, one of the sources said.

Oracle responsible

Under the terms of the deal, Oracle would be responsible for addressing national security issues. TikTok initially struck a deal with Oracle in 2022 to store U.S. users’ information to alleviate Washington’s worries about Chinese government interference.

TikTok’s management would remain in place, to operate the short video app, according to one of the sources.

The app, which is used by 170 million Americans, was temporarily taken offline for users shortly before Jan. 19, when a law requiring ByteDance to sell it on national security grounds or face a ban took effect.

Trump, after taking office a day later, signed an executive order seeking to delay by 75 days the enforcement of the law that was put in place after U.S. officials warned that under ByteDance, there was a risk of Americans’ data being misused.

Officials from Oracle and the White House held a meeting on Friday about a potential deal, and another meeting has been scheduled for next week, NPR reported.

Oracle was interested in a TikTok stake “in the tens of billions,” but the rest of the deal is in flux, NPR reported, citing the source.

Trump has said he “would like the United States to have a 50% ownership position in a joint venture” in TikTok.

NPR cited another source as saying that appeasing Congress is seen as a key hurdle by the White House.

Free speech advocates have opposed TikTok’s ban under a law passed by the U.S. Congress and signed by former President Joe Biden.

The company has said U.S. officials have misstated its ties to China, arguing its content recommendation engine and user data are stored in the United States on cloud servers operated by Oracle while content moderation decisions that affect American users are also made in the U.S. 

Big Tech wants data centers plugged into power plants; utilities balk

HARRISBURG, PENNSYLVANIA — Looking for a quick fix for their fast-growing electricity diets, tech giants are increasingly looking to strike deals with power plant owners to plug in directly, avoiding a potentially longer and more expensive process of hooking into a fraying electric grid that serves everyone else. 

It’s raising questions over whether diverting power to higher-paying customers will leave enough for others and whether it’s fair to excuse big power users from paying for the grid. Federal regulators are trying to figure out what to do about it, and quickly. 

Front and center is the data center that Amazon’s cloud computing subsidiary, Amazon Web Services, is building next to the Susquehanna nuclear plant in eastern Pennsylvania. 

The arrangement between the plant’s owners and AWS — called a “behind the meter” connection — is the first to come before the Federal Energy Regulatory Commission. For now, FERC has rejected a deal that could eventually send 960 megawatts — about 40% of the plant’s capacity — to the data center. That’s enough to power more than 500,000 homes. 

That leaves the deal and others that likely would follow in limbo. It’s not clear when FERC, which blocked the deal on procedural grounds, will take up the matter again or how the change in presidential administrations might affect things. 

“The companies, they’re very frustrated because they have a business opportunity now that’s really big,” said Bill Green, the director of the MIT Energy Initiative. “And if they’re delayed five years in the queue, for example — I don’t know if it would be five years, but years anyway — they might completely miss the business opportunity.” 

Driving demand for energy-hungry data centers 

The rapid growth of cloud computing and artificial intelligence has fueled demand for data centers that need power to run servers, storage systems, networking equipment and cooling systems. 

That’s spurred proposals to bring nuclear power plants out of retirement, develop small modular nuclear reactors, and build utility-scale renewable installations or new natural gas plants. In December, California-based Oklo announced an agreement to provide 12 gigawatts to data center developer Switch from small nuclear reactors powered by nuclear waste. 

Federal officials say fast development of data centers is vital to the economy and national security, including to keep pace with China in the artificial intelligence race. 

For AWS, the deal with Susquehanna satisfies its need for reliable power that meets its internal requirements for sources that, unlike coal-, oil- or gas-fueled plants, don’t emit planet-warming greenhouse gases. 

Big Tech also wants to stand up its data centers fast. But tech’s voracious appetite for energy comes at a time when the power supply is already strained by efforts to shift away from planet-warming fossil fuels. 

They can build data centers in a couple of years, said Aaron Tinjum of the Data Center Coalition. But in some areas, getting connected to the congested electricity grid can take four years, and sometimes much more, he said. 

Plugging directly into a power plant would take years off their development timelines. 

What’s in it for power providers 

In theory, the AWS deal would let Susquehanna’s owners sell power for more than they get by selling it into the grid. Talen Energy, Susquehanna’s majority owner, projected the deal would bring as much as $140 million in electricity sales in 2028, though it didn’t disclose exactly how much AWS will pay for the power. 

The profit potential is one that other nuclear plant operators are embracing after years of financial distress and frustration with how they are paid in the broader electricity markets. Many say they’ve been forced to compete in some markets flooded with cheap natural gas and state-subsidized solar and wind energy. 

Power plant owners also say the arrangement benefits the wider public, by bypassing the costly buildout of long power lines and leaving more transmission capacity on the grid for everyone else. 

FERC’s big decision 

A favorable ruling from FERC could open the door to many more huge data centers and other massive power users like hydrogen plants and bitcoin miners, analysts say. 

FERC’s 2-1 rejection in November was procedural. Recent comments by commissioners suggest they weren’t ready to decide how to regulate such a novel matter without more study. 

In the meantime, the agency is hearing arguments for and against the Susquehanna-AWS deal. 

Monitoring Analytics, the market watchdog in the mid-Atlantic grid, wrote in a filing to FERC that the impact would be “extreme” if the Susquehanna-AWS model were extended to all nuclear power plants in the territory. 

Energy prices would increase significantly, it said, and there is no explanation of how rising demand for power will be met even before big power plants drop out of the supply mix. 

Separately, two electric utility owners — which make money in deregulated states from building out the grid and delivering power — have protested that the Susquehanna-AWS arrangement amounts to freeloading off a grid that ordinary customers pay to build and maintain. Chicago-based Exelon and Columbus, Ohio-based American Electric Power say the Susquehanna-AWS arrangement would allow AWS to avoid $140 million a year that it would otherwise owe. 

Susquehanna’s owners say the data center won’t be on the grid and question why it should have to pay to maintain it. But critics contend that the power plant itself is benefiting from taxpayer subsidies and ratepayer-subsidized services — and shouldn’t be able to strike deals with private customers that could increase costs for others. 

FERC’s decision will have “massive repercussions for the entire country” because it will set a precedent for how FERC and grid operators will handle the waiting avalanche of similar requests from data center companies and nuclear plants, said Jackson Morris of the Natural Resources Defense Council. 

Stacey Burbure, a vice president for American Electric Power, told FERC at a hearing in November that it needs to move quickly. 

“The timing of this issue is before us,” she said, “and if we take our typical five years to get this perfect, it will be too late.” 

App provides immediate fire information to Los Angeles residents

OAKLAND, CALIFORNIA — From his home in northern California, Nick Russell, a former farm manager, is monitoring the Los Angeles-area fires.

He knows that about 600 kilometers south, people in Los Angeles are relying on his team’s live neighborhood-by-neighborhood updates on fire outbreaks, smoke direction, surface wind predictions and evacuation routes.

Russell is vice president of operations at Watch Duty, a free app that tracks fires and other natural disasters. It relies on a variety of data sources such as cameras and sensors throughout the state, government agencies, first responders, a core of volunteers, and its own team of reporters.

An emergency at his house, for example, would be “much different” from one at his neighbor’s house 0.4 kilometers away, Russell said. “That is true for communities everywhere, and that’s where technology really comes in.”

Watch Duty’s delivery of detailed localized information is one reason for its success with its 7 million users, many of whom downloaded the app in recent weeks.

It acts as a virtual emergency operations center, culling and verifying data points.

Watch Duty’s success points to the promise that technologies such as artificial intelligence and sensors will give residents and first responders the real-time information they need to survive and fight natural disasters.

Google and other firms have invested in technology to track fires. Several startup firms are also looking for ways to use AI, sensors and other technologies in natural disasters.

Utility firms work with Gridware, a company that places AI-enhanced sensors on power lines to detect a tree branch touching the line or any other vibrations that could indicate a problem.

Among Watch Duty’s technology partners is ALERTCalifornia, run by the University of California, San Diego, which has a network of more than 1,000 AI-enhanced cameras throughout the state looking for smoke. The cameras often detect fires before people call emergency lines, Russell said.

Together with ALERTCalifornia’s information, Russell said, “we have become the eyes and ears” of fires.

Another Watch Duty partner is N-5 Sensors, a Maryland-based firm. Its sensors, which are placed in the ground, detect smoke, heat and other signs of fire.

“They’re like a nose, if you will, so they detect smoke anomalies and different chemical patterns in the air,” Russell said.

Watch Duty is available in 22 states, mostly in the western U.S., and plans to expand to all states.

While fire has been its focus, Watch Duty also plans to track other natural disasters such as tornadoes, hurricanes, earthquakes and tsunamis, Russell said.

“Fire is not in the name,” he said. “We want to be that one-stop shop where people can go in those times of duress, to have a source that makes it clear and concise what’s happening.” 

Trump signs executive orders on AI, cryptocurrency and issues more pardons

WASHINGTON — U.S. President Donald Trump on Thursday signed an executive order related to AI to “make America the world capital in artificial intelligence,” his aide told reporters in the White House’s Oval Office.

The order sets a 180-day deadline for an Artificial Intelligence Action Plan to create a policy “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

Trump also told his AI adviser and national security assistant to work to remove policies and regulations put in place by former President Joe Biden.

Trump on Monday revoked a 2023 executive order signed by Biden that sought to reduce the risks that artificial intelligence poses to consumers, workers and national security.

Biden’s order required developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government, in line with the Defense Production Act, before they were released to the public.

Trump also signed an executive order creating a cryptocurrency working group tasked with proposing a new regulatory framework for digital assets and exploring the creation of a cryptocurrency stockpile.

The much-anticipated action also ordered that banking services for crypto companies be protected, and banned the creation of central bank digital currencies that could compete with existing cryptocurrencies.

The order sees Trump fulfill a campaign trail pledge to be a “crypto president” and promote the adoption of digital assets.

That is in stark contrast to the approach of Biden’s regulators, who, in a bid to protect Americans from fraud and money laundering, cracked down on crypto companies, suing exchanges Coinbase, Binance, Kraken and dozens more in federal court and alleging they were flouting U.S. laws.

The working group will be made up of the Treasury secretary, attorney general and chairs of the Securities and Exchange Commission and Commodity Futures Trading Commission, along with other agency heads. The group is tasked with developing a regulatory framework for digital assets, including stablecoins, a type of cryptocurrency typically pegged to the U.S. dollar.

The group is also set to “evaluate the potential creation and maintenance of a national digital asset stockpile … potentially derived from cryptocurrencies lawfully seized by the Federal Government through its law enforcement efforts.”

In December, Trump named venture capitalist and former PayPal executive David Sacks as the crypto and artificial intelligence czar. He will chair the group, the order said.

Finally, Trump signed pardons for 23 anti-abortion protesters on Thursday in the Oval Office of the White House.

The pardons came a day before anti-abortion protesters were due to descend on Washington for the annual March for Life.

UK watchdog targets Apple, Google mobile ecosystems with new digital market powers

London — Google’s Android and Apple’s iOS are facing fresh scrutiny from Britain’s competition watchdog, which announced investigations Thursday targeting the two tech giants’ mobile phone ecosystems under new powers to crack down on digital market abuses. 

The Competition and Markets Authority said it launched separate investigations to determine whether the mobile ecosystems controlled by Apple and Google should be given “strategic market status” that would mandate changes in the companies’ practices. 

The watchdog is flexing its newly acquired regulatory muscles again after the new digital market rules took effect at the start of the year. The CMA has already used the new rules, designed to protect consumers and businesses from unfair practices by Big Tech companies, to open an investigation into Google’s search ads business. 

The new investigations will examine whether Apple or Google’s mobile operating systems, app stores and browsers give either company a strategic position in the market. The watchdog said it’s interested in the level of competition and any barriers preventing rivals from offering competing products and services. 

The CMA will also look into whether Apple or Google are favoring their own apps and services, which it said “often come pre-installed and prominently placed on iOS and Android devices.” Google’s YouTube and Apple’s Safari browser are two examples of apps that come bundled with Android and iOS, respectively. 

And it will investigate “exploitative conduct,” such as whether Apple or Google forces app makers to agree to “unfair terms and conditions” as a condition for distributing apps on their app stores. 

The regulator has until October to wrap up the investigations. It said it could force either company to, for example, open up access to key functions other apps need to operate on mobile devices. Or it could force them to allow users to download apps outside of their own app stores. 

Both Google and Apple said they would work “constructively” with the U.K. regulator on the investigations. 

Google said “Android’s openness has helped to expand choice, reduce prices and democratize access to smartphones and apps. It’s the only example of a successful and viable open source mobile operating system.” 

The company said it favors “a way forward that avoids stifling choice and opportunities for U.K. consumers and businesses alike, and without risk to U.K. growth prospects.” 

Apple said it “believes in thriving and dynamic markets where innovation can flourish. We face competition in every segment and jurisdiction where we operate, and our focus is always the trust of our users.”


Trump signals aggressive stance as US races China in AI development

Before he had been in office for 48 hours, President Donald Trump sent a clear signal that to outpace China, his administration will be pursuing an aggressive agenda when it comes to pushing the United States forward on the development of artificial intelligence and the infrastructure that powers it.

On his first day in office, Trump rescinded an executive order signed in 2023 by former President Joe Biden that sought to place some guardrails around the development of increasingly powerful generative AI tools and to create other protections for privacy, civil rights and national security.

The following day, Trump met with the leaders of several leading technology firms, including Sam Altman, CEO of OpenAI; Larry Ellison, chairman of Oracle; and Masayoshi Son, CEO of SoftBank, to announce a $500 billion private sector investment in AI infrastructure known as Stargate.

“Beginning immediately, Stargate will be building the physical and virtual infrastructure to power the next generation of advancements in AI, and this will include the construction of colossal data centers,” Trump said at a media event at the White House on Tuesday.

Specifically, Stargate will invest in the creation of as many as 10 huge data centers in the United States that will provide the computing for artificial intelligence systems. The first data center is already under construction in Texas. The massive private sector investment will create up to 100,000 U.S. jobs, the executives said.

Keeping AI in the US

“What we want to do is, we want to keep it in this country,” Trump said. “China is a competitor, and others are competitors. We want it to be in this country, and we’re making it available. I’m going to help a lot through emergency declarations, because we have an emergency. We have to get this stuff built.”

The assembled tech leaders took the opportunity to praise the new president.

“I think this will be the most important project of this era,” Altman said. “We wouldn’t be able to do this without you, Mr. President.”

Janet Egan, a senior fellow in the technology and national security program at the Center for a New American Security, said that all the signals Trump is sending indicate he is serious about maintaining the United States’ current advantages in the development of advanced AI.

“I think this shows that he’s going to have a really clear mind as to how to partner closely with the private sector to enable them to speed up and run fast,” Egan said. “We’ve also seen him take direct action on some of the bottlenecks that are impeding the development of AI infrastructure in the U.S., and a particular focus is energy.”

OpenAI, the creator of ChatGPT, has relied on Microsoft data centers for its computing. The firm reportedly discussed with the Biden administration the regulatory hurdles of planning and permitting when building data centers.

In a policy paper released earlier this month, OpenAI cited the competition with China, laying out its policy proposals for “extending America’s global leadership in AI innovation.”

“Chips, data, energy and talent are the keys to winning on AI — and this is a race America can and must win,” the paper said. “There’s an estimated $175 billion sitting in global funds awaiting investment in AI projects, and if the U.S. doesn’t attract those funds, they will flow to China-backed projects — strengthening the Chinese Communist Party’s global influence.”

Patrick Hedger, director of policy at NetChoice, a technology trade association, told VOA that the Stargate announcement “immediately signaled to me that private capital is more than willing to come off the sidelines these days with the new Trump administration.”

As part of his flurry of executive actions on Monday, Trump eliminated several preexisting executive orders placing limits on fossil fuel extraction and power generation. In the White House event on Monday, Trump also noted that AI data centers consume vast amounts of electricity and said he would be clearing the way for Stargate and other private companies to invest in new energy generation projects.

China competition

While Trump eliminated many of Biden’s executive orders immediately on Monday, he does not appear to have taken action against some of the former president’s other AI-related initiatives. Last year, Biden took several steps to restrict China’s access to cutting-edge technology related to AI, specifically by restricting companies from selling advanced semiconductors, and the machinery used to produce them, to Chinese firms.

On that issue, Egan said, Trump and Biden appear to be on the same page.

“I think it’s important to also note the continuity in how Trump’s approaching AI,” she said. “He, too, sees it as a national security risk and national security imperative. … So, I think we should expect to see this run-fast approach to AI complemented by continued efforts to understand and manage emerging risks. Particularly cyber, nuclear, biological risks, as well as a more muscular approach to export controls and enforcement.”

Speed and safety

Louis Rosenberg, CEO and chief scientist at Unanimous AI and a prominent figure in the field for decades, told VOA he thinks there is a bipartisan consensus that AI needs to be developed speedily but also responsibly.

“At the highest level, the accelerating risks around frontier AI is not a partisan issue,” he wrote in an email exchange. “Both parties realize that significant safeguards will be needed as AI gets increasingly intelligent and flexible, especially as autonomous AI agents get released at large scale.”

Rosenberg said the most significant question is how the U.S. can remain the global leader in AI development while making sure the systems that are deployed are safe and reliable.

“I suspect the Trump administration will address AI risks by deploying its own targeted policies that are not as broad as the Biden executive order was but can address real threats much faster,” he wrote. “The Biden executive order was very useful in raising the alarm about AI, but from a practical perspective it did not provide meaningful protections from the important emerging risks.

“Ultimately we need to find a way to move fast on AI development and move fast on AI protection. We need speed on both fronts,” Rosenberg said.

VOA Silicon Valley bureau chief Michelle Quinn contributed to this report.
