Economy

Economy news. An economy is the system of production, distribution, and consumption of goods and services within a society. It encompasses everything from individual spending and business operations to government policies and international trade. The economy is influenced by numerous factors, including supply and demand, inflation, employment rates, and fiscal policy.

AI decodes oinks and grunts to keep pigs happy in Danish study

VIPPEROD, Denmark — European scientists have developed an artificial intelligence algorithm capable of interpreting pig sounds, aiming to create a tool that can help farmers improve animal welfare.

The algorithm could potentially alert farmers to negative emotions in pigs, thereby improving their well-being, according to Elodie Mandel-Briefer, a behavioral biologist at the University of Copenhagen who is co-leading the study.

The scientists, from universities in Denmark, Germany, Switzerland, France, Norway and the Czech Republic, used thousands of recorded pig sounds in different scenarios, including play, isolation and competition for food, to find that grunts, oinks, and squeals reveal positive or negative emotions.

While many farmers already have a good understanding of the well-being of their animals by watching them in the pig pen, existing tools mostly measure their physical condition, said Mandel-Briefer.

“Emotions of animals are central to their welfare, but we don’t measure it much on farms,” she said.

The algorithm demonstrated that pigs kept in outdoor, free-range or organic farms with the ability to roam and dig in the dirt produced fewer stress calls than conventionally raised pigs. The researchers believe that this method, once fully developed, could also be used to label farms, helping consumers make informed choices.

“Once we have the tool working, farmers can have an app on their phone that can translate what their pigs are saying in terms of emotions,” Mandel-Briefer said.

Short grunts typically indicate positive emotions, while long grunts often signal discomfort, such as when pigs push each other by the trough. High-frequency sounds like screams or squeals usually mean the pigs are stressed, for instance, when they are in pain, fight, or are separated from each other.

The scientists used these findings to create an algorithm that employs AI.

“Artificial intelligence really helps us to both process the huge amount of sounds that we get, but also to classify them automatically,” Mandel-Briefer said.

China space plan highlights commitment to space exploration, analysts say

Chinese officials recently released a 25-year space exploration plan that details five major scientific themes and 17 priority areas for scientific breakthroughs with one goal: to make China a world leader in space by 2050 and a key competitor with the U.S. for decades to come.

Last week, the Chinese Academy of Sciences, the China National Space Administration, and the China Manned Space Agency jointly released a space exploration plan for 2024 through 2050.

It includes searching for extraterrestrial life; exploring Mars, Venus and Jupiter; sending crews to the moon; and building an international lunar research station by 2035.

Clayton Swope, deputy director of the Aerospace Security Project at the Center for Strategic and International Studies, says the plan highlights China’s long-term commitment and answers some lingering questions as well.

“I think a lot of experts have wondered if China would continue to invest in space, particularly in science and exploration, given a lot of economic uncertainties in China … but this is a sign that they’re committed,” Swope said.

The plan reinforces a “commitment to really look at space science and exploration in the long term and not just short term,” he added.

The plan outlines Beijing’s goals to send astronauts to the moon by 2030, retrieve the first samples from Mars and complete a mission to the Jupiter system in the next few years. It also outlines three phases of development, each with specific goals for space exploration and key scientific discoveries.

The extensive plan is not only a statement that Beijing can compete with the U.S. in high-tech industries but also a way of boosting national pride, analysts say.

“Space in particular has a huge public awareness, public pride,” says Nicholas Eftimiades, a retired senior intelligence officer and senior fellow at the Atlantic Council, a Washington-based think tank. “It emboldens the Chinese people, gives them a strong sense of nationalism and superiority, and that’s what the main focus of the Beijing government is.”

Swope agrees.

“I think it’s [China’s long-term space plan] a manifestation of China’s interest and desire from a national prestige and honor standpoint to really show that it’s a player on the international stage up there with the United States,” he said.

Antonia Hmaidi, a senior analyst at the Mercator Institute for China Studies, told VOA in an email response that “China’s space focus goes back to the 1960s,” and that “China has also been very successful at meeting its own goals and timelines.”

In recent years, China has carried out several successful space science missions, including Chang’e-4, which made the world’s first soft landing and roving exploration on the far side of the moon; Chang’e-5, China’s first mission to return lunar samples to Earth; and Tianwen-1, which put a Chinese rover on the surface of Mars for the first time.

In addition to these space missions, Beijing has implemented several programs aimed at increasing scientific discovery related to space, particularly through the launch of several scientific satellites.

Since 2011, China has developed and launched scientific satellites including Dark Matter Particle Explorer, Quantum Experiments at Space Scale, Advanced Space-based Solar Observatory, and the Einstein Probe.

While China continues to make progress with space exploration and scientific discovery, according to Swope, there is still a way to go before it catches up to the United States.

“China is undeniably the number 2 space power in the world today, behind the United States,” he said. “The United States is still by far the most important in a lot of measures and metrics, including in science and exploration.”

Eftimiades said one key reason the United States has maintained its lead in the space race is the success of Washington’s private, commercial aerospace companies.

“The U.S. private industry has got the jump on China,” Eftimiades said. “There’s no type of industrial control, industrial plan. In fact, Congress and administration shy away from that completely.”

Unlike in the United States, large space entities in China are often state-owned, such as the China Aerospace Science and Technology Corporation, Eftimiades said.

He added that one advantage of China’s space entities being state-owned is the Chinese government’s ability to “direct their industries toward specific objectives.” At the same time, he said, the bureaucracy that comes with state-owned enterprises leads to less “cutting-edge technology.”

This year, China has focused on growing its space presence relative to the U.S. by conducting more orbital launches. 

Beijing planned to conduct 100 orbital launches this year, according to the state-owned China Aerospace Science and Technology Corporation, which was to conduct 70 of them. However, as of October 15, China had completed 48 orbital launches.

Last week, SpaceX announced it had launched its 100th rocket of the year and had another liftoff just hours later. The private company is aiming for 148 launches this year.

Earlier this year, the U.S. Department of Defense implemented its first Commercial Space Integration Strategy, which outlined the department’s efforts to take technologies produced in the private sector and apply them to U.S. national security purposes.

In a statement on the strategy, the Department of Defense said it plans to work closely with private and commercial space companies known for innovation and scalable production.

According to the statement, officials say “the strategy is based on the premise that the commercial space sector’s innovative capabilities, scalable production and rapid technology refresh rates provide pathways to enhance the resilience of DOD space capabilities and strengthen deterrence.”

Many space technologies have military applications, Swope said.

“A lot of things that are done in space have a dual use, so [space technologies] may be primarily used for scientific purposes, but also could be used to design and build and test some type of weapons technology,” Swope said.

Hmaidi says China’s newest space plan stands out for what it doesn’t have.

“The most interesting and striking part about China’s newest space plan to me was the narrow focus on basic science over military goals,” she told VOA in an email. “However, we know from open-source research that China is also very active in military space development.”

“This plan contains only one part of China’s space planning, namely the part that is unlikely to have direct military utility, while not mentioning other missions with direct military utility like its low-earth orbit internet program,” Hmaidi explained.

Chinese official urges Apple to continue ‘deepening’ presence in China

A top Chinese official has urged tech giant Apple to deepen its presence and investment in innovation in the world’s second largest economy at a time when supply chains and companies are shifting production and operations away from China.

As U.S.-China geopolitical tensions simmer and tech competition between Beijing and Western countries intensifies, foreign investment in China shrank in 2023 to its lowest level in three decades, according to government statistics.

The United States has banned the export of advanced technology to China, and Beijing’s crackdown on spying in the name of national security has spooked investors.

On Wednesday, Jin Zhuanglong – China’s Minister for Industry and Information Technology – told Apple CEO Tim Cook he hoped that “Apple will continue to deepen its presence in the Chinese market,” urging Cook to “increase investment in innovation, grow alongside Chinese firms, and share in the dividends of high-quality investment,” according to a ministry statement.

At the meeting Jin also discussed “Apple’s development in China, network data security management, (and) cloud services,” according to the statement.

China has the world’s largest smartphone market, and Apple is a leading competitor. However, the iPhone maker has been losing market share in the country to a growing number of local rivals.

In the second quarter of this year, Apple ranked sixth among smartphone vendors in China with a 16% market share, down three positions from the same period last year, according to analysis firm Canalys, AFP reported.

Jin also repeated a frequent pledge from officials in Beijing that China would strive to provide a “better environment” for global investors and “continue to expand high-level opening up.”

Cook’s trip to China was his second of the year. His posts on the X-like Chinese social media platform Weibo showed he visited an Apple store in downtown Beijing, toured an organic farm and explored ancient neighborhoods with prominent artists such as photographer Chen Man.

Cook added that he met with students from China Agricultural University and Zhejiang University to get feedback on how iPhones and iPads can help farmers adopt more sustainable practices.

Some information in this report came from Reuters and AFP.

‘Garbage in, garbage out’: AI fails to debunk disinformation, study finds

Washington — When it comes to combating disinformation ahead of the U.S. presidential elections, artificial intelligence and chatbots are failing, a media research group has found.

The latest audit by the research group NewsGuard found that generative AI tools struggle to effectively respond to false narratives.

In its latest audit of 10 leading chatbots, compiled in September, NewsGuard found that AI will repeat misinformation 18% of the time and offer a nonresponse 38.33% of the time — leading to a “fail rate” of almost 40%, according to NewsGuard.

“These chatbots clearly struggle when it comes to handling prompt inquiries related to news and information,” said McKenzie Sadeghi, the audit’s author. “There’s a lot of sources out there, and the chatbots might not be able to discern between which ones are reliable versus which ones aren’t.”

NewsGuard has a database of false news narratives that circulate, encompassing global wars and U.S. politics, Sadeghi told VOA.

Every month, researchers feed trending false narratives into leading chatbots in three different forms: innocent user prompts, leading questions and “bad actor” prompts. From there, the researchers measure if AI repeats, fails to respond or debunks the claims.

AI repeats false narratives mostly in response to bad actor prompts, which mirror the tactics used by foreign influence campaigns to spread disinformation. Around 70% of the instances where AI repeated falsehoods were in response to bad actor prompts, as opposed to leading prompts or innocent user prompts.
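
As a rough illustration of the audit workflow described above — prompting each chatbot with a false narrative in three framings and labeling the reply as a repeat, a non-response or a debunk — the following Python sketch shows how such a loop and its “fail rate” might be computed. It is a hypothetical example, not NewsGuard’s actual tooling; the query_chatbot and classify_response functions are assumed placeholders supplied by the auditor.

    # Hypothetical sketch of a chatbot misinformation audit loop.
    # query_chatbot() and classify_response() are assumed placeholders, not real APIs.
    from collections import Counter

    PROMPT_STYLES = ["innocent", "leading", "bad_actor"]

    def audit(chatbots, false_narratives, query_chatbot, classify_response):
        """Count 'repeat', 'non_response' and 'debunk' labels per chatbot."""
        results = {bot: Counter() for bot in chatbots}
        for bot in chatbots:
            for narrative in false_narratives:
                for style in PROMPT_STYLES:
                    reply = query_chatbot(bot, narrative, style)
                    results[bot][classify_response(reply, narrative)] += 1
        return results

    def fail_rate(counts):
        """Share of responses that repeated the claim or gave no answer."""
        total = sum(counts.values())
        return (counts["repeat"] + counts["non_response"]) / total if total else 0.0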

Foreign influence campaigns are able to take advantage of such flaws, according to the Office of the Director of National Intelligence. Russia, Iran and China have used generative AI to “boost their respective U.S. election influence efforts,” according to an intelligence report released last month.

As an example of how easily AI chatbots can be misled, Sadeghi cited a NewsGuard study in June that found AI would repeat Russian disinformation if it “masqueraded” as coming from an American local news source.

From myths about migrants to falsehoods about FEMA, the spread of disinformation and misinformation has been a consistent theme throughout the 2024 election cycle.

“Misinformation isn’t new, but generative AI is definitely amplifying these patterns and behaviors,” Sejin Paik, an AI researcher at Georgetown University, told VOA.

Because the technology behind AI is constantly changing and evolving, it is often unable to detect erroneous information, Paik said. This leads to issues not only with the factuality of AI’s output but also with its consistency.

NewsGuard also found that two-thirds of “high quality” news sites block generative AI models from using their media coverage. As a result, AI often has to learn from lower-quality, misinformation-prone news sources, according to the watchdog.

This can be dangerous, experts say. Much of the non-paywalled media that AI trains on is either “propaganda” or “deliberate strategic communication,” media scholar Matt Jordan told VOA.

“AI doesn’t know anything: It doesn’t sift through knowledge, and it can’t evaluate claims,” Jordan, a media professor at Penn State, told VOA. “It just repeats based on huge numbers.”

AI has a tendency to repeat “bogus” news because statistically, it tends to be trained on skewed and biased information, he added. He called this a “garbage in, garbage out model.”

NewsGuard aims to set the standard for measuring accuracy and trustworthiness in the AI industry through monthly surveys, Sadeghi said.

The sector is growing fast, even as issues around disinformation are flagged. The generative AI industry has experienced monumental growth in the past few years. OpenAI’s ChatGPT currently reports 200 million weekly users, more than double from last year, according to Reuters.

The growth in popularity of these tools leads to another problem in their output, according to Anjana Susarla, a professor of responsible AI at Michigan State University. Since such a high quantity of information goes in — from users and external sources — it is hard to detect and stop the spread of misinformation.

Many users are still willing to believe the outputs of these chatbots are true, Susarla said.

“Sometimes, people can trust AI more than they trust human beings,” she told VOA.

The solution to this may be bipartisan regulation, she added. She hopes that the government will encourage social media platforms to regulate malicious misinformation.

Jordan, on the other hand, believes the solution lies with media audiences.

“The antidote to misinformation is to trust in reporters and news outlets instead of AI,” he told VOA. “People sometimes think that it’s easier to trust a machine than it is to trust a person. But in this case, it’s just a machine spewing out what untrustworthy people have said.”

Microsoft to allow autonomous AI agent development starting next month

Microsoft will allow customers to build autonomous artificial intelligence agents starting in November, the software giant said on Monday, in its latest move to tap the booming technology.

The company is positioning autonomous agents — programs that, unlike chatbots, require little human intervention — as “apps for an AI-driven world,” capable of handling client inquiries, identifying sales leads and managing inventory.

Other big technology firms such as Salesforce have also touted the potential of such agents, tools that some analysts say could provide companies with an easier path to monetizing the billions of dollars they are pouring into AI.

Microsoft said its customers can use Copilot Studio – an application that requires little knowledge of computer code – to create autonomous agents in public preview from November. It is using several AI models developed in-house and by OpenAI for the agents.

The company is also introducing 10 ready-to-use agents that can help with routine tasks ranging from supply chain management to expense tracking and client communications.

In one demo, McKinsey & Co, which had early access to the tools, created an agent that can manage client inquiries by checking interaction history, identifying the consultant for the task and scheduling a follow-up meeting.

“The idea is that Copilot [the company’s chatbot] is the user interface for AI,” Charles Lamanna, corporate vice president of business and industry Copilot at Microsoft, told Reuters.

“Every employee will have a Copilot, their personalized AI agent, and then they will use that Copilot to interface and interact with the sea of AI agents that will be out there.”

Tech giants are facing investor pressure to show returns on their significant AI investments. Microsoft’s shares fell 2.8% in the September quarter, underperforming the S&P 500, but remain more than 10% higher for the year.

Some concerns have risen in recent months about the pace of Copilot adoption, with research firm Gartner saying in August its survey of 152 IT organizations showed that the vast majority had not progressed their Copilot initiatives past the pilot stage.

Tiny Caribbean island of Anguilla turns AI boom into digital gold mine

The artificial intelligence boom has benefited chatbot makers, computer scientists and Nvidia investors. It’s also providing an unusual windfall for Anguilla, a tiny island in the Caribbean.

ChatGPT’s debut nearly two years ago heralded the dawn of the AI age and kicked off a digital gold rush as companies scrambled to stake their own claims by acquiring websites that end in .ai.

That’s where Anguilla comes in. The British territory was allotted control of the .ai internet address in the 1990s. It was one of hundreds of obscure top-level domains assigned to individual countries and territories based on their names. While the domains are supposed to indicate a website has a link to a particular region or language, it’s not always a requirement.

Google uses google.ai to showcase its artificial intelligence services while Elon Musk uses x.ai as the homepage for his Grok AI chatbot. Startups like AI search engine Perplexity have also snapped up .ai web addresses, redirecting users from the .com version.

Anguilla’s earnings from web domain registration fees quadrupled last year to $32 million, fueled by the surging interest in AI. The income now accounts for about 20% of Anguilla’s total government revenue. Before the AI boom, it hovered at around 5%.

Anguilla’s government, which uses the gov.ai home page, collects a fee every time an .ai web address is renewed. The territory signed a deal Tuesday with a U.S. company to manage the domains amid explosive demand but the fees aren’t expected to change. It also gets paid when new addresses are registered and expired ones are sold off. Some sites have fetched tens of thousands of dollars.

The money directly boosts the economy of Anguilla, which is just 91 square kilometers and has a population of about 16,000. Blessed with coral reefs, clear waters and palm-fringed white sand beaches, the island is a haven for uber-wealthy tourists. Still, many residents are underprivileged, and tourism has been battered by the pandemic and, before that, a powerful hurricane.

Anguilla doesn’t have its own AI industry though Premier Ellis Webster hopes that one day it will become a hub for the technology. He said it was just luck that it was Anguilla, and not nearby Antigua, that was assigned the .ai domain in 1995 because both places had those letters in their names.

Webster said the money takes the pressure off government finances and helps fund key projects but cautioned that “we can’t rely on it solely.”

“You can’t predict how long this is going to last,” Webster said in an interview with the AP. “And so I don’t want to have our economy and our country and all our programs just based on this. And then all of a sudden there’s a new fad comes up in the next year or two, and then we are left now having to make significant expenditure cuts, removing programs.”

To help keep up with the explosive growth in domain registrations, Anguilla said Tuesday it’s signing a deal with a U.S.-based domain management company, Identity Digital, to help manage the effort. They said the agreement will mean more revenue for the government while improving the resilience and security of the web addresses.

Identity Digital, which also manages Australia’s .au domain, expects to migrate all .ai domain services to its systems by the start of next year, Identity Digital Chief Strategy Officer Ram Mohan said in an interview.

A local software entrepreneur had helped Anguilla set up its registry system decades earlier.

There are now more than 533,000 .ai web domains, an increase of more than 10-fold since 2018. The International Monetary Fund said in a May report that the earnings will help diversify the economy, “thus making it more resilient to external shocks.”

Webster expects domain-related revenue to rise further and says it could even double this year from last year’s $32 million.

He said the money will finance the airport’s expansion, free medical care for senior citizens and completion of a vocational technology training center at Anguilla’s high school.

The income also provides “budget support” for other projects the government is eyeing, such as a national development fund it could quickly tap for hurricane recovery efforts. The island normally relies on assistance from its administrative power, Britain, which comes with conditions, Webster said.

Mohan said working with Identity Digital will also defend against cyber crooks trying to take advantage of the hype around artificial intelligence.

He cited the example of Tokelau, an island in the Pacific Ocean, whose .tk addresses became notoriously associated with spam and phishing after outsourcing its registry services.

“We worry about bad actors taking something, sticking a .ai to it, and then making it sound like they are much bigger or much better than what they really are,” Mohan said, adding that the company’s technology will quickly take down shady sites.

Another benefit is that .ai websites will no longer need to connect to the government’s digital infrastructure through a single internet cable to the island, which leaves them vulnerable to digital bottlenecks and physical disruptions.

Now they’ll use the company’s servers distributed globally, which means it will be faster to access them because they’ll be closer to users.

“It goes from milliseconds to microseconds,” Mohan said.

Drone maker DJI sues Pentagon over Chinese military listing

WASHINGTON — China-based DJI sued the U.S. Defense Department on Friday for adding the drone maker to a list of companies allegedly working with Beijing’s military, saying the designation is wrong and has caused the company significant financial harm.

DJI, the world’s largest drone manufacturer, which sells more than half of all commercial drones in the U.S., asked a U.S. district judge in Washington to order its removal from the Pentagon list designating it as a “Chinese military company,” saying it “is neither owned nor controlled by the Chinese military.”

Being placed on the list serves as a warning to U.S. entities and companies about the national security risks of doing business with the designated firms.

DJI’s lawsuit says because of the Defense Department’s “unlawful and misguided decision” it has “lost business deals, been stigmatized as a national security threat, and been banned from contracting with multiple federal government agencies.”

The company added “U.S. and international customers have terminated existing contracts with DJI and refuse to enter into new ones.”

The Defense Department did not immediately respond to a request for comment.

DJI said on Friday it filed the lawsuit after the Defense Department did not engage with the company over the designation for more than 16 months, saying it “had no alternative other than to seek relief in federal court.”

Amid strained ties between the world’s two biggest economies, the updated list is one of numerous actions Washington has taken in recent years to highlight and restrict Chinese companies that it says may strengthen Beijing’s military.

Many major Chinese firms are on the list, including aviation company AVIC, memory chip maker YMTC, telecom operator China Mobile and energy company CNOOC.

In May, lidar manufacturer Hesai Group filed a suit challenging the Pentagon’s Chinese military designation for the company. On Wednesday, the Pentagon removed Hesai from the list but said it will immediately relist the China-based firm on national security grounds.

DJI is facing growing pressure in the United States.

Earlier this week DJI told Reuters that Customs and Border Protection is stopping imports of some DJI drones from entering the United States, citing the Uyghur Forced Labor Prevention Act.

DJI said no forced labor is involved at any stage of its manufacturing.

U.S. lawmakers have repeatedly raised concerns that DJI drones pose data transmission, surveillance and national security risks, something the company rejects.

Last month, the U.S. House voted to bar new DJI drones from operating in the U.S. The bill awaits Senate action. The Commerce Department said last month it is seeking comments on whether to impose restrictions on Chinese drones that would effectively ban them in the U.S. — similar to proposed restrictions on Chinese vehicles.

Residents on Kenya’s coast use app to track migratory birds

The Tana River delta on the Kenyan coast includes a vast range of habitats and a remarkably productive ecosystem, says UNESCO. It is also home to many bird species, including some that are nearly threatened. Residents are helping local conservation efforts with an app called eBird. Juma Majanga reports.

Deepfakes featuring deceased terrorists spread radical propaganda

In a year with over 60 national elections worldwide, concerns are high that individuals and entities are using deepfake images and recordings to contribute to the flood of election misinformation. VOA’s Rio Tuasikal reports on some potentially dangerous videos made using generative AI.

US prosecutors see rising threat of AI-generated child sex abuse imagery

U.S. federal prosecutors are stepping up their pursuit of suspects who use artificial intelligence tools to manipulate or create child sex abuse images, as law enforcement fears the technology could spur a flood of illicit material.

The U.S. Justice Department has brought two criminal cases this year against defendants accused of using generative AI systems, which create text or images in response to user prompts, to produce explicit images of children.

“There’s more to come,” said James Silver, the chief of the Justice Department’s Computer Crime and Intellectual Property Section, predicting further similar cases.

“What we’re concerned about is the normalization of this,” Silver said in an interview. “AI makes it easier to generate these kinds of images, and the more that are out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”

The rise of generative AI has sparked concerns at the Justice Department that the rapidly advancing technology will be used to carry out cyberattacks, boost the sophistication of cryptocurrency scammers and undermine election security. 

Child sex abuse cases mark some of the first times that prosecutors are trying to apply existing U.S. laws to alleged crimes involving AI, and even successful convictions could face appeals as courts weigh how the new technology may alter the legal landscape around child exploitation. 

Prosecutors and child safety advocates say generative AI systems can allow offenders to morph and sexualize ordinary photos of children and warn that a proliferation of AI-produced material will make it harder for law enforcement to identify and locate real victims of abuse.

The National Center for Missing and Exploited Children, a nonprofit group that collects tips about online child exploitation, receives an average of about 450 reports each month related to generative AI, according to Yiota Souras, the group’s chief legal officer.

That’s a fraction of the average of 3 million monthly reports of overall online child exploitation the group received last year.

Untested ground

Cases involving AI-generated sex abuse imagery are likely to tread new legal ground, particularly when an identifiable child is not depicted.

Silver said in those instances, prosecutors can charge obscenity offenses when child pornography laws do not apply.

Prosecutors indicted Steven Anderegg, a software engineer from Wisconsin, in May on charges including transferring obscene material. Anderegg is accused of using Stable Diffusion, a popular text-to-image AI model, to generate images of young children engaged in sexually explicit conduct and sharing some of those images with a 15-year-old boy, according to court documents.

Anderegg has pleaded not guilty and is seeking to dismiss the charges by arguing that they violate his rights under the U.S. Constitution, court documents show.

He has been released from custody while awaiting trial. His attorney was not available for comment.

Stability AI, the maker of Stable Diffusion, said the case involved a version of the AI model that was released before the company took over the development of Stable Diffusion. The company said it has made investments to prevent “the misuse of AI for the production of harmful content.”

Federal prosecutors also charged a U.S. Army soldier with child pornography offenses in part for allegedly using AI chatbots to morph innocent photos of children he knew to generate violent sexual abuse imagery, court documents show.

The defendant, Seth Herrera, pleaded not guilty and has been ordered held in jail to await trial. Herrera’s lawyer did not respond to a request for comment.

Legal experts said that while sexually explicit depictions of actual children are covered under child pornography laws, the landscape around obscenity and purely AI-generated imagery is less clear. 

The U.S. Supreme Court in 2002 struck down as unconstitutional a federal law that criminalized any depiction, including computer-generated imagery, appearing to show minors engaged in sexual activity. 

“These prosecutions will be hard if the government is relying on the moral repulsiveness alone to carry the day,” said Jane Bambauer, a law professor at the University of Florida who studies AI and its impact on privacy and law enforcement.

Federal prosecutors have secured convictions in recent years against defendants who possessed sexually explicit images of children that also qualified as obscene under the law. 

Advocates are also focusing on preventing AI systems from generating abusive material. 

Two nonprofit advocacy groups, Thorn and All Tech Is Human, secured commitments in April from some of the largest players in AI including Alphabet’s Google, Amazon.com, Facebook and Instagram parent Meta Platforms, OpenAI and Stability AI to avoid training their models on child sex abuse imagery and to monitor their platforms to prevent its creation and spread. 

“I don’t want to paint this as a future problem, because it’s not. It’s happening now,” said Rebecca Portnoff, Thorn’s director of data science.

“As far as whether it’s a future problem that will get completely out of control, I still have hope that we can act in this window of opportunity to prevent that.”

Watchdog: ‘Serious questions’ over Meta’s handling of anti-immigrant posts

Meta’s independent content watchdog said Thursday there were “serious questions” about how the social media giant deals with anti-immigrant content, particularly in Europe. 

The Oversight Board, established by Meta in 2020 and sometimes called its “supreme court,” launched a probe after seeing a “significant number” of appeals over anti-immigrant content. 

The board has chosen two symbolic cases — one from Germany and the other from Poland — to assess whether Meta, which owns Facebook and Instagram, is following human rights law and its own policies on hate speech. 

Helle Thorning-Schmidt, co-chair of the board and a former Danish prime minister, said it was “critical” to get the balance right between free speech and protection of vulnerable groups. 

“The high number of appeals we get on immigration-related content from across the EU tells us there are serious questions to ask about how the company handles issues related to this, including the use of coded speech,” she said in a statement. 

The first piece of content to be assessed by the board was posted in May on a Facebook page claiming to be the official account of Poland’s far-right Confederation party. 

An image depicts Polish Prime Minister Donald Tusk looking through a peephole with a black man approaching him from behind, accompanied by text suggesting his government would allow immigration to surge. 

Meta rejected an appeal from a user to take down the post despite the text including a word considered by some as a racial slur. 

In the other case, an apparently AI-generated image was posted on a German Facebook page showing a blond-haired blue-eyed woman, a German flag and a stop sign. 

The accompanying text likens immigrants to “gang rape specialists.”  

A user complained, but Meta decided not to remove the post.

“The board selected these cases to address the significant number of appeals, especially from Europe, against content that shares views on immigration in ways that may be harmful towards immigrants,” the watchdog said in a statement. 

The board said it wanted to hear from the public and would spend “the next few weeks” discussing the issue before publishing its decision. 

Decisions by the board, funded by a trust set up by Meta, are not binding, though the company has promised to follow its rulings. 

China says unidentified foreign company conducted illegal mapping services 

BEIJING — China’s state security ministry said that a foreign company had been found to have illegally conducted geographic mapping activities in the country under the guise of autonomous driving research and outsourcing to a licensed Chinese mapping firm.

The ministry did not disclose the names of either company in a statement on its WeChat account on Wednesday.

The foreign company, ineligible for geographic surveying and mapping activities in China, “purchased a number of cars and equipped them with high-precision radar, GPS, optical lenses and other gear,” read the statement.

In addition to directly instructing the Chinese company to conduct surveying and mapping in many Chinese provinces, the foreign company appointed foreign technicians to give “practical guidance” to mapping staffers with the Chinese firm, enabling the latter to transfer its acquired data overseas, the ministry alleged.

Most of the data the foreign company has collected have been determined to be state secrets, according to the ministry, which said state security organs, together with relevant departments, had carried out joint law enforcement activities.

The affected companies and relevant responsible personnel have been held legally accountable, the state security ministry said, without elaborating.

China strictly regulates mapping activities and data, which are key to developing autonomous driving, due to national security concerns. No foreign firm is licensed to conduct mapping in China, and data collected in China by vehicles from foreign automakers such as Tesla must be stored locally.

The U.S. Commerce Department has also proposed prohibiting Chinese software and hardware in connected and autonomous vehicles on American roads due to national security concerns.

Also on Wednesday, a Chinese cybersecurity industry group recommended that Intel products sold in China should be subject to a security review, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests.

EU AI Act checker reveals Big Tech’s compliance pitfalls

LONDON — Some of the most prominent artificial intelligence models are falling short of European regulations in key areas such as cybersecurity resilience and discriminatory output, according to data seen by Reuters.

The EU had long debated new AI regulations before OpenAI released ChatGPT to the public in late 2022. The record-breaking popularity and ensuing public debate over the supposed existential risks of such models spurred lawmakers to draw up specific rules around “general-purpose” AIs.

Now a new tool designed by Swiss startup LatticeFlow and partners, and supported by European Union officials, has tested generative AI models developed by big tech companies like Meta and OpenAI across dozens of categories in line with the bloc’s sweeping AI Act, which is coming into effect in stages over the next two years.

Awarding each model a score between 0 and 1, a leaderboard published by LatticeFlow on Wednesday showed models developed by Alibaba, Anthropic, OpenAI, Meta and Mistral all received average scores of 0.75 or above.

However, the company’s “Large Language Model (LLM) Checker” uncovered some models’ shortcomings in key areas, spotlighting where companies may need to divert resources in order to ensure compliance.

Companies failing to comply with the AI Act will face fines of 35 million euros ($38 million) or 7% of global annual turnover.

Mixed results

At present, the EU is still trying to establish how the AI Act’s rules around generative AI tools like ChatGPT will be enforced, convening experts to craft a code of practice governing the technology by spring 2025.

But LatticeFlow’s test, developed in collaboration with researchers at Swiss university ETH Zurich and Bulgarian research institute INSAIT, offers an early indicator of specific areas where tech companies risk falling short of the law.

For example, discriminatory output has been a persistent issue in the development of generative AI models, reflecting human biases around gender, race and other areas when prompted.

When testing for discriminatory output, LatticeFlow’s LLM Checker gave OpenAI’s “GPT-3.5 Turbo” a relatively low score of 0.46. For the same category, Alibaba Cloud’s “Qwen1.5 72B Chat” model received only a 0.37.

Testing for “prompt hijacking,” a type of cyberattack in which hackers disguise a malicious prompt as legitimate to extract sensitive information, the LLM Checker awarded Meta’s “Llama 2 13B Chat” model a score of 0.42. In the same category, French startup Mistral’s “8x7B Instruct” model received 0.38.

“Claude 3 Opus,” a model developed by Google-backed Anthropic, received the highest average score, 0.89.

The test was designed in line with the text of the AI Act, and will be extended to encompass further enforcement measures as they are introduced. LatticeFlow said the LLM Checker would be freely available for developers to test their models’ compliance online.
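
LatticeFlow has not published the internal mechanics of the LLM Checker, but the leaderboard scoring described above — per-category scores between 0 and 1 rolled up into an average for each model — can be sketched roughly as follows. The category names and values below are illustrative placeholders, not real LLM Checker output.

    # Illustrative aggregation of per-category benchmark scores into a model average.
    # Category names and values are placeholders, not real LLM Checker output.
    from statistics import mean

    def average_score(category_scores: dict[str, float]) -> float:
        """Average the per-category scores (each between 0 and 1) for one model."""
        return mean(category_scores.values())

    example = {
        "discriminatory_output": 0.46,     # hypothetical values for illustration
        "prompt_hijacking": 0.55,
        "cybersecurity_resilience": 0.80,
    }

    print(f"average score: {average_score(example):.2f}")  # prints 0.60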

Petar Tsankov, the firm’s CEO and cofounder, told Reuters the test results were positive overall and offered companies a roadmap for them to fine-tune their models in line with the AI Act.

“The EU is still working out all the compliance benchmarks, but we can already see some gaps in the models,” he said. “With a greater focus on optimizing for compliance, we believe model providers can be well-prepared to meet regulatory requirements.”

Meta declined to comment. Alibaba, Anthropic, Mistral, and OpenAI did not immediately respond to requests for comment.

While the European Commission cannot verify external tools, the body has been informed throughout the LLM Checker’s development and described it as a “first step” in putting the new laws into action.

A spokesperson for the European Commission said: “The Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements.”

Chinese cyber association calls for review of Intel products sold in China 

BEIJING — Intel products sold in China should be subject to a security review, the Cybersecurity Association of China (CSAC) said on Wednesday, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests. 

While CSAC is an industry group rather than a government body, it has close ties to the Chinese state and the raft of accusations against Intel, published in a long post on its official WeChat group, could trigger a security review from China’s powerful cyberspace regulator, the Cyberspace Administration of China (CAC). 

“It is recommended that a network security review is initiated on the products Intel sells in China, so as to effectively safeguard China’s national security and the legitimate rights and interests of Chinese consumers,” CSAC said. 

Last year, the CAC barred domestic operators of key infrastructure from buying products made by U.S. memory chipmaker Micron Technology Inc after deeming the company’s products had failed its network security review. 

Intel did not immediately respond to a request for comment. The company’s shares were down 2.7% in U.S. premarket trading.  

‘Age of electricity’ to follow looming fossil fuel peak, IEA says

LONDON — The world is on the brink of a new age of electricity with fossil fuel demand set to peak by the end of the decade, meaning surplus oil and gas supplies could drive investment into green energy, the International Energy Agency said on Wednesday.

But it also flagged a high level of uncertainty as conflicts embroil the oil and gas-producing Middle East and Russia and as countries representing half of global energy demand have elections in 2024.

“In the second half of this decade, the prospect of more ample – or even surplus – supplies of oil and natural gas, depending on how geopolitical tensions evolve, would move us into a very different energy world,” IEA Executive Director Fatih Birol said in a release alongside its annual report.

Surplus fossil fuel supplies would likely lead to lower prices and could enable countries to dedicate more resources to clean energy, moving the world into an “age of electricity,” Birol said.

In the nearer term, there is also the possibility of reduced supplies should the Middle East conflict disrupt oil flows.

The IEA said such conflicts highlighted the strain on the energy system and the need for investment to speed up the transition to “cleaner and more secure technologies.”

A record-high level of clean energy came online globally last year, the IEA said, including more than 560 gigawatts (GW) of renewable power capacity. Around $2 trillion is expected to be invested in clean energy in 2024, almost double the amount invested in fossil fuels.

In its scenario based on current government policies, global oil demand peaks before 2030 at just less than 102 million barrels/day (mb/d), and then falls back to 2023 levels of 99 mb/d by 2035, largely because of lower demand from the transport sector as electric vehicle use increases.

The report also lays out the likely impact on future oil prices if stricter environmental policies are implemented globally to combat climate change.

In the IEA’s current policies scenario, oil prices decline to $75 per barrel in 2050 from $82 per barrel in 2023.

That compares to $25 per barrel in 2050 should government actions fall in line with the goal of cutting energy sector emissions to net zero by then.

Although the report forecasts an increase in demand for liquefied natural gas (LNG) of 145 billion cubic meters (bcm) between 2023 and 2030, it said this would be outpaced by an increase in export capacity of around 270 bcm over the same period.

“The overhang in LNG capacity looks set to create a very competitive market at least until this is worked off, with prices in key importing regions averaging $6.5-8 per million British thermal units (mmBtu) to 2035,” the report said.

Asian LNG prices, regarded as an international benchmark, are currently around $13 per mmBtu.

Tech firms increasingly look to nuclear power for data center

As energy-hungry computer data centers and artificial intelligence programs place ever greater demands on the U.S. power grid, tech companies are looking to a technology that just a few years ago appeared ready to be phased out: nuclear energy. 

After several decades in which investment in new nuclear facilities in the U.S. had slowed to a crawl, tech giants Microsoft and Google have recently announced investments in the technology, aimed at securing a reliable source of emissions-free power for years into the future.  

Earlier this year, online retailer Amazon, which has an expansive cloud computing business, announced it had reached an agreement to purchase a nuclear energy-fueled data center in Pennsylvania and that it had plans to buy more in the future. 

However, the three companies’ strategies rely on somewhat different approaches to the problem of harnessing nuclear energy, and it remains unclear which, if any, will be successful. 

Energy demand 

Data centers, which concentrate thousands of powerful computers in one location, consume prodigious amounts of power, both to run the computers themselves and to operate the elaborate systems put in place to dissipate the large amount of heat they generate.  

A recent study by Goldman Sachs estimated that data centers currently consume between 1% and 2% of all available power generation. That percentage is expected to at least double by the end of the decade, even accounting for new power sources coming online. The study projected a 160% increase in data center power consumption by 2030. 

The U.S. Department of Energy has estimated that the largest data centers can consume more than 100 megawatts of electricity, or enough to power about 80,000 homes. 

Small, modular reactors 

Google’s plan is, in some ways, the most radical departure — both from the current structure of the energy grid and from traditional means of generating nuclear power. The internet search giant announced on Monday that it has partnered with Kairos Power to fund the construction of up to seven small-scale nuclear reactors that, across several locations, would combine to generate 500 megawatts of power. 

The small modular reactors (SMRs) are a new, and largely untested, technology. Unlike sprawling nuclear plants, SMRs are compact, requiring much less infrastructure to keep them operational and safe. 

“The smaller size and modular design can reduce construction timelines, allow deployment in more places, and make the final project delivery more predictable,” Google and Kairos said in a press release.  

The companies said they intend to have the first of the SMRs online by 2030, with the rest to follow by 2035. 

Great promise 

Sola Talabi, president of Pittsburgh Technical, a nuclear consulting firm, told VOA that SMR technology holds great promise for the future. He said that the plants’ small size will eliminate many of the safety concerns that larger reactors present. 

For example, some smaller reactors generate so much less heat than larger reactors that they can utilize “passive” cooling systems that are not susceptible to the kind of mechanical failures that caused disaster at Japan’s Fukushima plant in 2011 and the Soviet Union’s Chernobyl plant in 1986.  

Talabi, who is also an adjunct faculty member in nuclear engineering at the University of Pittsburgh and University of Michigan, said that SMRs’ modular nature will allow for rapid deployment and substantial cost savings as time goes on. 

“Pretty much every reactor that has been built [so far] has been built like it’s the first one,” he said. “But with these reactors, because we will be able to use the same processes, the same facilities, to produce them, we actually expect that we will be able to … achieve deployment scale relatively quickly.” 

Raising doubts 

Not all experts are convinced that SMRs are going to live up to expectations. 

Edwin Lyman, director of nuclear power safety for the Union of Concerned Scientists, told VOA that the Kairos reactors Google is hoping to install use a new technology that has never been tested under real-world conditions.

“At this point, it’s just hope without any real basis in experimental fact to believe that this is going to be a productive and reliable solution for the need to power data centers over the medium term,” he said. 

He pointed out that the large-scale deployment of new nuclear reactors will also result in the creation of a new source of nuclear waste, which the U.S. is still struggling to find a way to dispose of at scale.  

“I think what we’re seeing is really a bubble — a nuclear bubble — which I suspect is going to be deflated once these optimistic, hopeful agreements turn out to be much harder to execute,” Lyman said. 

Three Mile Island 

Microsoft and Amazon have plotted a more conventional path toward powering their data centers with nuclear energy. 

In its announcement last month, Microsoft revealed that it has reached an agreement with Constellation Energy to restart a mothballed nuclear reactor at Three Mile Island in Pennsylvania and to use the power it produces for its data operations. 

Three Mile Island is best known as the site of the worst nuclear disaster in U.S. history. In 1979, the site’s Unit 2 reactor suffered a malfunction that resulted in radioactive gases and iodine being released into the local environment.  

However, the facility’s Unit 1 reactor did not fail, and it operated safely for several decades. It was shut down in 2019, after cheap shale gas drove the price of energy down so far that it made further operations economically unfeasible. 

It is expected to cost $1.6 billion to bring the reactor back online, and Microsoft has agreed to fund that investment. It has also signed an agreement to purchase power from the facility for 20 years. The companies say they believe that they can bring the facility back online by 2028. 

Amazon’s plan, by contrast, does not require either new technology or the resurrection of an older nuclear facility. 

The data center that the company purchased from Talen Energy is located on the same site as the fully operational Susquehanna nuclear plant in Salem, Pennsylvania, and draws power directly from it. 

Amazon characterized the $650 million investment as part of a larger effort to reach net-zero carbon emissions by 2040. 

Report: Iran cyberattacks against Israel surge after Gaza war

Israel has become the top target of Iranian cyberattacks since the start of the Gaza war last year, while Tehran had focused primarily on the United States before the conflict, Microsoft said Tuesday.

“Following the outbreak of the Israel-Hamas war, Iran surged its cyber, influence, and cyber-enabled influence operations against Israel,” Microsoft said in an annual report.

“From October 7, 2023, to July 2024, nearly half of the Iranian operations Microsoft observed targeted Israeli companies,” said the Microsoft Digital Defense Report.

From July to October 2023, only 10 percent of Iranian cyberattacks targeted Israel, while 35 percent were aimed at American entities and 20 percent at the United Arab Emirates, according to the U.S. software giant.

Since the war started Iran has launched numerous social media operations with the aim of destabilizing Israel.

“Within two days of Hamas’ attack on Israel, Iran stood up several new influence operations,” Microsoft said.

An account called “Tears of War” impersonated Israeli activists critical of Prime Minister Benjamin Netanyahu’s handling of a crisis over scores of hostages taken by Hamas, according to the report.

An account called “KarMa”, created by an Iranian intelligence unit, claimed to represent Israelis calling for Netanyahu’s resignation. 

Iran also began impersonating partners after the war started, Microsoft said.

Iranian services created a Telegram account using the logo of the military wing of Hamas to spread false messages about the hostages in Gaza and threaten Israelis, Microsoft said. It was not clear if Iran acted with Hamas’s consent, it added.

“Iranian groups also expanded their cyber-enabled influence operations beyond Israel, with a focus on undermining international political, military, and economic support for Israel’s military operations,” the report said.

The Hamas terror attack on October 7, 2023, resulted in the deaths of 1,206 people, mostly civilians, according to an AFP tally of official Israeli figures, including hostages killed in captivity.  

Israel’s retaliatory military campaign in Gaza has killed 42,289 people, the majority civilians, according to the health ministry in the Hamas-run territory. The U.N. has described the figures as reliable. 

Africa’s farming future could include more digital solutions

NAIROBI, KENYA — More than 400 delegates and organizations working in Africa’s farming sector are in Nairobi, Kenya, this week to discuss how digital agriculture can improve the lives of farmers and the continent’s food system.

Tech innovators discussed the need for increased funding, especially for women.

In past decades, African farmers have struggled to produce enough food to feed the continent.

DigiCow is one of the tech companies at the conference that says it has answers to the problem. The Kenya-based company says it provides farmers with digital recordkeeping, education via audio on an app, and access to financing and marketing.

Maureen Saitoti, DigiCow’s brand manager, said the platform has improved the lives of at least half a million farmers.

“Other than access to finance, it is also able to offer access to the market because a farmer is able to predict the harvest they are anticipating and begin conversations with buyers who have also been on board on the platform,” she said. “So, this has proven to provide a wholesome integration of the ecosystem, supporting small-scale farmers.”

Integrating digital systems into food production helps farmers gain access to seed, fertilizer and loans, and helps prevent pests and diseases on farms, organizers said.

Innovation in agriculture technology is seen as helping reach marginalized groups, including women.

Sieka Gatabaki, program director for Mercy Corps AgriFin, which works with digital tool providers in 40 countries to increase the productivity and incomes of small-scale farmers, said his organization stresses education and practical information.

“We also focus on agronomic advice that gives the farmers the right kind of skills and knowledge to execute on their farms, as well as precision information such as weather that enables them to make the right decisions [about] how they grow and when they should grow and what they should grow in different geomatic climates,” Gatabaki said.

“Then we definitely expect that those farmers will increase their productivity and income.”

According to the State of AgTech Investment Report 2024, farming attracted $1.6 billion in funding in the past decade. But experts say the current funding is not enough to meet the sector’s growing demands.

David Saunder, director of strategy and growth at Briter Bridges, says funding systems have evolved to cope with problems faced by farmers and the food industry.

“Funding follows those businesses, those startups, that can viably grow and scale their businesses, and that’s what we are trying to do with AgTech to increase the data and information on those,” he said.

During the meeting, tech developers, experts and donors will also discuss how artificial intelligence and alternative data could be used to improve productivity.


Microsoft: Cybercriminals increasingly help Russia, China, Iran target US, allies

WASHINGTON — Russia, China and Iran are increasingly relying on criminal networks to lead cyberespionage and hacking operations against adversaries such as the United States, according to a report on digital threats published Tuesday by Microsoft.

The growing collaboration between authoritarian governments and criminal hackers has alarmed national security officials and cybersecurity experts. They say it represents the increasingly blurred lines between actions directed by Beijing or the Kremlin aimed at undermining rivals and the illicit activities of groups typically more interested in financial gain.

In one example, Microsoft’s analysts found that a criminal hacking group with links to Iran infiltrated an Israeli dating site and then tried to sell or ransom the personal information it obtained. Microsoft concluded the hackers had two motives: to embarrass Israelis and make money.

In another, investigators identified a Russian criminal network that infiltrated more than 50 electronic devices used by the Ukrainian military in June, apparently seeking access and information that could aid Russia’s invasion of Ukraine. There was no obvious financial motive for the group, aside from any payment they may have received from Russia.

Marriage of convenience

For nations such as Russia, China, Iran and North Korea, teaming up with cybercriminals offers a marriage of convenience with benefits for both sides. Governments can boost the volume and effectiveness of cyber activities without added cost. For the criminals, it offers new avenues for profit and the promise of government protection.

“We’re seeing in each of these countries this trend toward combining nation-state and cybercriminal activities,” said Tom Burt, Microsoft’s vice president of customer security and trust.

So far there is no evidence suggesting that Russia, China and Iran are sharing resources with each other or working with the same criminal networks, Burt said. But he said the growing use of private cyber “mercenaries” shows how far America’s adversaries will go to weaponize the internet.

Microsoft’s report analyzed cyber threats between July 2023 and June 2024, looking at how criminals and foreign nations use hacking, spear phishing, malware and other techniques to gain access and control over a target’s system. The company says its customers face more than 600 million such incidents every day.

Russia focused much of its cyber activity on Ukraine, trying to penetrate military and government systems and spreading disinformation designed to undermine support for the war among Ukraine’s allies.

Ukraine has responded with its own cyber efforts, including one last week that knocked some Russian state media outlets offline.

US elections targeted

Networks tied to Russia, China and Iran have also targeted American voters, using fake websites and social media accounts to spread false and misleading claims about the 2024 election. Analysts at Microsoft agree with the assessment of U.S. intelligence officials who say Russia is targeting the campaign of Vice President Kamala Harris, while Iran is working to oppose former President Donald Trump.

Iran has also hacked into Trump’s campaign and sought, unsuccessfully, to interest Democrats in the material. Federal officials have also accused Iran of covertly supporting American protests over the war in Gaza.

Russia and Iran will likely accelerate the pace of their cyber operations targeting the U.S. as election day approaches, Burt said.

China, meanwhile, has largely stayed out of the presidential race, focusing its disinformation on down-ballot races for Congress or state and local office. Microsoft found networks tied to Beijing also continue to target Taiwan and other countries in the region.

Denials from all parties

In response, a spokesperson for the Chinese Embassy in Washington said allegations that China partners with cybercriminals are groundless and accused the U.S. of spreading its own “disinformation about the so-called Chinese hacking threats.”

In a statement, spokesperson Liu Pengyu said that “our position is consistent and clear. China firmly opposes and combats cyberattacks and cybertheft in all forms.”

Russia and Iran have also rejected accusations that they’re using cyber operations to target Americans. Messages left with representatives of Russia, China, Iran and North Korea were not returned Monday.

Efforts to disrupt foreign disinformation and cyber capabilities have escalated along with the threat, but the anonymous, porous nature of the internet sometimes undercuts the effectiveness of the response.

Federal authorities recently announced plans to seize hundreds of website domains used by Russia to spread election disinformation and to support efforts to hack former U.S. military and intelligence figures. But investigators at the Atlantic Council’s Digital Forensic Research Lab found that sites seized by the government can easily and quickly be replaced.

Within one day of the Department of Justice seizing several domains in September, for example, researchers spotted 12 new websites created to take their place. One month later, they continue to operate.


Britain to allow drones to inspect power lines, wind turbines

LONDON — Britain’s aviation regulator said Tuesday that it would allow drones to inspect infrastructure such as power lines and wind turbines, a move the authority has described as a significant milestone.

The U.K.’s Civil Aviation Authority (CAA) had said earlier this year that it wanted to permit more drone flying for such activities, as well as for deliveries and emergency services. In August, it selected six projects to test the approach.

Drones inspecting infrastructure will now be able to fly beyond the visual line of sight of their remote pilots.

“While some drones have been flying beyond visual line of sight in the U.K. for several years, these flights are primarily trials under strict restrictions,” the CAA said. 

Under the CAA’s new policy, some drones will be able to remain at low heights close to infrastructure where there is little or no potential for any other aircraft to operate. It will also reduce costs, the CAA said. 

Drones will inspect power lines for damage, carry out maintenance checks of wind turbines and even be used as “flying guard dogs” for site security. 

The CAA will work with several operators to test and evaluate the policy, which according to the regulator’s director, Sophie O’Sullivan, “paves the way for new ways drones will improve everyday life.” 


Paris Motor Show opens during brewing EV trade war between EU, China

PARIS — Auto manufacturers competing to persuade drivers to go electric are rolling out cheaper, more tech-rich models at the Paris Motor Show, targeting everyone from luxury clients to students yet to receive their driving licenses.

The biennial show has long been a major industry showcase, tracing its history to 1898. 

Chinese manufacturers are attending in force, despite European Union threats to punitively tax imports of their electric vehicles in a brewing trade war with Beijing. Long-established European manufacturers are fighting back with new efforts to win consumers who have balked at high-priced EVs. 

Here’s a look at the show’s opening day on Monday. 

More new models from China 

Chinese EV startups Leapmotor and XPeng showcased models they said incorporate artificial intelligence technology. 

Leapmotor, founded in 2015, unveiled a compact electric SUV, the B10. It will be manufactured in Poland for European buyers, said Leapmotor’s head of product planning, Zhong Tianyue. The company did not announce a price for the B10, which will launch next year.

Leapmotor also said a smaller electric commuter car it showcased in Paris, the T03, will retail from a competitive 18,900 euros ($20,620). Those sold in France will be imported from China but assembled in Poland, Zhong said. 

Leapmotor also announced a starting price of 36,400 euros ($39,700) in Europe for its larger family car, the C10. 

Sales outside of China are through a joint venture with Stellantis, the world’s fourth largest carmaker. Leapmotor said European sales started in September. 

XPeng braces for tariff hit

Attending the Paris show for the first time, the decade-old Chinese EV manufacturer XPeng unveiled a sleek sedan, the P7+. 

CEO He Xiaopeng said XPeng aims to begin deliveries in Europe next year. European prices for the P7+ weren’t given, but the CEO said it will start in China at 209,800 yuan, the equivalent of 27,100 euros, or $29,600.

XPeng’s president, Brian Gu, said the EU’s threatened import duties could complicate the company’s expansion plans if Brussels and Beijing don’t find an amicable solution to their trade dispute before an end-of-October deadline. 

Brussels says subsidies help Chinese companies to unfairly undercut EU industry prices, with Chinese-built electric cars jumping from 3.9% of the EV market in 2020 to 25% by September 2023. 

“The tariff will put a lot of pressure on our business model. It’s a direct hit on our margin, which is already not very high,” Gu said. 

Vehicles for young teens 

Manufacturers of small electric vehicles that can be driven in Europe without a license are finding a growing market among teens as young as 14 and their parents who, for safety reasons, prefer that they zip around on four wheels rather than on motorbikes.

Several manufacturers of the two-seaters are showcasing their vehicles in Paris, including France’s Citroen. The starting price for its Ami, or “Friend,” is just under 8,000 euros ($8,720). Launched in France in 2020, the plastic-shelled vehicle is now also sold in other European markets and in Turkey, Morocco and South America.

“It’s not a car. It’s a mobility object,” said Citroen’s product chief for the Ami, Alain Le Gouguec. 

European legislation allows teenagers without a full license to drive the Ami and similar buggies from age 14 after an eight-hour training course. They’re limited to a top speed of 45 kilometers per hour (28 mph). 

The vehicles are also finding markets among adults who have lost their license for driving infractions or never obtained a full license, as well as outside cities in areas with poor public transport.

Renault subsidiary Mobilize said that even in winter’s energy-sapping cold its two-seater, no-license, plastic-shelled Duo can go 100 kilometers (over 60 miles) between charges. A phone app acts as its door and ignition key. 

Another French manufacturer, Ligier, sells its no-license two-seaters in both diesel and electric versions.


Online hate against South Asian Americans rises steadily, report says

WASHINGTON — Online hate against Americans of South Asian ancestry has risen steadily in 2023 and 2024 as politicians from that community have gained prominence, according to a report released Wednesday by the nonprofit group Stop AAPI Hate.

Why it’s important

Democratic presidential candidate and Vice President Kamala Harris is of Indian descent, as are former Republican presidential candidates Nikki Haley and Vivek Ramaswamy. Republican vice presidential candidate JD Vance’s wife, Usha Vance, is also Indian American.

Harris faces Republican former President Donald Trump in the 2024 U.S. elections.

There has been a steady rise in anti-Asian hate in extremist online spaces from January 2023 to August 2024, the report said.

The nonprofit group blamed the rise on a “toxic political climate in which a growing number of leaders and far-right extremist voices continue to spew bigoted political rhetoric and disinformation.”

Key quotes

“Online threats of violence towards Asian communities reached their highest levels in August 2024, after Usha Vance appeared at the Republican National Convention and Kamala Harris was declared a presidential nominee at the Democratic National Convention,” Stop AAPI Hate said.

“The growing prevalence of anti-South Asian online hate … in 2023 and 2024 tracks with the rise in South Asian political representation this election cycle,” it added.

By the numbers

Among Asian American subgroups, South Asian communities were targeted with the highest volume of anti-Asian online hostility, with 60% of slurs directed at them in that period, according to the report.

Anti-South Asian slurs in extremist online spaces doubled last year, from about 23,000 to more than 46,000, and peaked in August 2024.

There are nearly 5.4 million people of South Asian descent living in the United States, with ancestry from nations including India, Bangladesh, Bhutan, Nepal, Pakistan and Sri Lanka.
