NASA Spacecraft Discovers Tiny Moon Around Asteroid

The little asteroid visited by NASA’s Lucy spacecraft this week had a big surprise for scientists.

It turns out that the asteroid Dinkinesh has a dinky sidekick — a mini moon.

The discovery was made during Wednesday’s flyby of Dinkinesh, 480 million kilometers (300 million miles) away in the main asteroid belt beyond Mars. The spacecraft snapped a picture of the pair when it was about 435 kilometers (270 miles) out.

In data and images beamed back to Earth, the spacecraft confirmed that Dinkinesh is barely a half-mile (790 meters) across. Its closely circling moon is a mere one-tenth of a mile (220 meters) in size.

NASA sent Lucy past Dinkinesh as a rehearsal for the bigger, more mysterious asteroids out near Jupiter. Launched in 2021, the spacecraft will reach the first of these so-called Trojan asteroids in 2027 and explore them for at least six years. The original target list of seven asteroids now stands at 11.

Dinkinesh means “you are marvelous” in the Amharic language of Ethiopia. It’s also the Amharic name for Lucy, the 3.2-million-year-old remains of a human ancestor found in Ethiopia in the 1970s, for which the spacecraft is named.

“Dinkinesh really did live up to its name; this is marvelous,” Southwest Research Institute’s Hal Levison, the lead scientist, said in a statement.

FTX Founder Convicted of Defrauding Cryptocurrency Customers

FTX founder Sam Bankman-Fried’s spectacular rise and fall in the cryptocurrency industry — a journey that included his testimony before Congress, a Super Bowl advertisement and dreams of a future run for president — hit rock bottom Thursday when a New York jury convicted him of fraud in a scheme that cheated customers and investors of at least $10 billion.

After the monthlong trial, jurors rejected Bankman-Fried’s claim during four days on the witness stand in Manhattan federal court that he never committed fraud or meant to cheat customers before FTX, once the world’s second-largest crypto exchange, collapsed into bankruptcy a year ago.

“His crimes caught up to him. His crimes have been exposed,” Assistant U.S. Attorney Danielle Sassoon told the jury of the onetime billionaire just before they were read the law by Judge Lewis A. Kaplan and began deliberations. Sassoon said Bankman-Fried turned his customers’ accounts into his “personal piggy bank” as up to $14 billion disappeared.

She urged jurors to reject Bankman-Fried’s insistence when he testified over three days that he never committed fraud or plotted to steal from customers, investors and lenders and didn’t realize his companies were at least $10 billion in debt until October 2022.

Bankman-Fried was required to stand and face the jury as guilty verdicts on all seven counts were read. He kept his hands clasped tightly in front of him. When he sat down after the reading, he kept his head tilted down for several minutes.

After the judge set a sentencing date of March 28, Bankman-Fried’s parents moved to the front row behind him. His father put his arm around his wife. As Bankman-Fried was led out of the courtroom, he looked back and nodded toward his mother, who nodded back and then became emotional, wiping her hand across her face after he left the room.

U.S. Attorney Damian Williams told reporters after the verdict that Bankman-Fried “perpetrated one of the biggest financial frauds in American history, a multibillion-dollar scheme designed to make him the king of crypto.”

“But here’s the thing: The cryptocurrency industry might be new. The players like Sam Bankman-Fried might be new. This kind of fraud, this kind of corruption is as old as time, and we have no patience for it,” he said.

Bankman-Fried’s attorney, Mark Cohen, said in a statement they “respect the jury’s decision. But we are very disappointed with the result.”

“Mr. Bankman Fried maintains his innocence and will continue to vigorously fight the charges against him,” Cohen said.

The trial attracted intense interest with its focus on fraud on a scale not seen since the 2009 prosecution of Bernard Madoff, whose Ponzi scheme over decades cheated thousands of investors out of about $20 billion. Madoff pleaded guilty and was sentenced to 150 years in prison, where he died in 2021.

The prosecution of Bankman-Fried, 31, put a spotlight on the emerging industry of cryptocurrency and a group of young executives in their 20s who lived together in a $30 million luxury apartment in the Bahamas as they dreamed of becoming the most powerful player in a new financial field.

Prosecutors made sure jurors knew that the defendant they saw in court with short hair and a suit was also the man with big messy hair and shorts that became his trademark appearance after he started his cryptocurrency hedge fund, Alameda Research, in 2017 and FTX, his cryptocurrency exchange, two years later.

They showed the jury pictures of Bankman-Fried sleeping on a private jet, sitting with a deck of cards and mingling at the Super Bowl with celebrities including the singer Katy Perry. Assistant U.S. Attorney Nicolas Roos called Bankman-Fried someone who liked “celebrity chasing.”

In a closing argument, defense lawyer Mark Cohen said prosecutors were trying to turn “Sam into some sort of villain, some sort of monster.”

“It’s both wrong and unfair, and I hope and believe that you have seen that it’s simply not true,” he said. “According to the government, everything Sam ever touched and said was fraudulent.”

The government relied heavily on the testimony of three former members of Bankman-Fried’s inner circle, his top executives including his former girlfriend, Caroline Ellison, to explain how Bankman-Fried used Alameda Research to siphon billions of dollars from customer accounts at FTX.

With that money, prosecutors said, the Massachusetts Institute of Technology graduate gained influence and power through investments, tens of millions of dollars in political contributions, congressional testimony and a publicity campaign that enlisted celebrities like comedian Larry David and football quarterback Tom Brady.

Ellison, 28, testified that Bankman-Fried directed her, while she was chief executive of Alameda Research, to commit fraud as he pursued ambitions to lead huge companies, wield influence through his spending and someday run for U.S. president. She said he thought he had a 5% chance of becoming president.

Becoming tearful as she described the collapse of the cryptocurrency empire last November, Ellison said the revelations that caused customers collectively to demand their money back, exposing the fraud, brought a “relief that I didn’t have to lie anymore.”

FTX cofounder Gary Wang, who was FTX’s chief technology officer, revealed in his testimony that Bankman-Fried directed him to insert code into FTX’s operations so that Alameda Research could make unlimited withdrawals from FTX and have a credit line of up to $65 billion. Wang said the money came from customers.
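Wang described the change only in broad terms. Purely as a hypothetical illustration (none of the names or logic below come from the testimony, and this is not FTX’s actual code), a single privileged-account flag is enough to let one account bypass an exchange’s normal balance check:

```python
# Hypothetical sketch of a privileged-account backdoor in an exchange's
# ledger code. Names and details are invented to illustrate the kind of
# change Wang described, not to reproduce it.
class Ledger:
    def __init__(self):
        self.balances = {}           # account -> balance in dollars
        self.allow_negative = set()  # accounts exempt from overdraft checks

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def withdraw(self, account, amount):
        balance = self.balances.get(account, 0)
        # Ordinary accounts cannot overdraw. A flagged account can go
        # arbitrarily negative, in effect drawing on pooled customer funds.
        if account not in self.allow_negative and amount > balance:
            raise ValueError("insufficient funds")
        self.balances[account] = balance - amount

ledger = Ledger()
ledger.deposit("customer", 100)
ledger.allow_negative.add("trading_firm")
ledger.withdraw("trading_firm", 1_000)  # succeeds despite a zero balance
```

To an outside reviewer, a flag like this is a one-line anomaly buried in routine balance logic, which suggests how such a change could go unnoticed until withdrawals spiked.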

Nishad Singh, the former head of engineering at FTX, testified that he felt “blindsided and horrified” when he saw the extent of the fraud committed by a man he once admired, and said the collapse last November left him suicidal.

Ellison, Wang and Singh all pleaded guilty to fraud charges and testified against Bankman-Fried in the hopes of leniency at sentencing.

Bankman-Fried was arrested in the Bahamas in December and extradited to the United States, where he was freed on a $250 million personal recognizance bond with electronic monitoring and a requirement that he remain at the home of his parents in Palo Alto, California.

His communications, including hundreds of phone calls with journalists and internet influencers, along with emails and texts, eventually got him into trouble when the judge concluded he was trying to influence prospective trial witnesses and ordered him jailed in August.

During the trial, prosecutors used Bankman-Fried’s public statements, online announcements and his congressional testimony against him, showing how the entrepreneur repeatedly promised customers that their deposits were safe and secure as late as last Nov. 7 when he tweeted, “FTX is fine. Assets are fine” as customers furiously tried to withdraw their money. He deleted the tweet the next day. FTX filed for bankruptcy four days later.

In his closing, Roos mocked Bankman-Fried’s testimony, saying that under questioning from his lawyer, the defendant’s words were “smooth, like it had been rehearsed a bunch of times?”

But under cross examination, “he was a different person,” the prosecutor said. “Suddenly on cross-examination he couldn’t remember a single detail about his company or what he said publicly. It was uncomfortable to hear. He never said he couldn’t recall during his direct examination, but it happened over 140 times during his cross-examination.”

Former federal prosecutors said the quick verdict — after only half a day of deliberation — showed how well the government tried the case.

“The government tried the case as we expected,” said Joshua A. Naftalis, a partner at Pallas Partners LLP and a former Manhattan prosecutor. “It was a massive fraud, but that doesn’t mean it had to be a complicated fraud, and I think the jury understood that argument.”

World Leaders Agree on Artificial Intelligence Risks

World leaders have agreed on the importance of mitigating risks posed by rapid advancements in the emerging technology of artificial intelligence, at a U.K.-hosted safety conference.

The inaugural AI Safety Summit, hosted by British Prime Minister Rishi Sunak in Bletchley Park, England, started Wednesday, with senior officials from 28 nations, including the United States and China, agreeing to work toward a “shared agreement and responsibility” about AI risks. Plans are in place for further meetings later this year in South Korea and France.

Leaders, including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris and U.N. Secretary-General Antonio Guterres, discussed each of their individual testing models to ensure the safe growth of AI.

Thursday’s session included focused conversations among what the U.K. called a small group of countries “with shared values.” The leaders in the group came from the EU, the U.N., Italy, Germany, France and Australia.

Some leaders, including Sunak, said immediate sweeping regulation is not the way forward, reflecting the view of some AI companies that fear excessive regulation could thwart the technology before it can reach its full potential.

At a press conference on Thursday, Sunak announced another landmark agreement by countries pledging to “work together on testing the safety of new AI models before they are released.”

The countries involved in the talks included the U.S., EU, France, Germany, Italy, Japan, South Korea, Singapore, Canada and Australia. China did not participate in the second day of talks.

The summit will conclude with a conversation between Sunak and billionaire Elon Musk. Musk on Wednesday told fellow attendees that legislation on AI could pose risks, and that the best steps forward would be for governments to work to understand AI fully to harness the technology for its positive uses, including uncovering problems that can be brought to the attention of lawmakers.

Some information in this report was taken from The Associated Press and Reuters.

India Probing Phone Hacking Complaints by Opposition Politicians, Minister Says

India’s cybersecurity agency is investigating complaints of mobile phone hacking by senior opposition politicians who reported receiving warning messages from Apple, Information Technology Minister Ashwini Vaishnaw said.

Vaishnaw was quoted in the Indian Express newspaper as saying Thursday that CERT-In, the computer emergency response team based in New Delhi, had started the probe, adding that “Apple confirmed it has received the notice for investigation.”

A political aide to Vaishnaw and two officials in the federal home ministry told Reuters that all the cyber security concerns raised by the politicians were being scrutinized.

There was no immediate comment from Apple about the investigation.

This week, Indian opposition leader Rahul Gandhi accused Prime Minister Narendra Modi’s government of trying to hack into opposition politicians’ mobile phones after some lawmakers shared screenshots on social media of a notification quoting the iPhone manufacturer as saying: “Apple believes you are being targeted by state-sponsored attackers who are trying to remotely compromise the iPhone associated with your Apple ID.”

A senior minister from Modi’s government also said he had received the same notification on his phone.

Apple said it did not attribute the threat notifications to “any specific state-sponsored attacker,” adding that “it’s possible that some Apple threat notifications may be false alarms, or that some attacks are not detected.”

In 2021, India was rocked by reports that the government had used Israeli-made Pegasus spyware to snoop on scores of journalists, activists and politicians, including Gandhi.

The government has declined to reply to questions about whether India or any of its state agencies had purchased Pegasus spyware for surveillance.

US Pushes for Global Protections for Threats Posed by AI

U.S. Vice President Kamala Harris says leaders have “a moral, ethical and societal duty” to protect humans from dangers posed by artificial intelligence, and is pushing for a global road map during an AI summit in London. Analysts agree and say one element needs to be constant: human oversight. VOA’s Anita Powell reports from Washington.

British PM Rishi Sunak Hosts AI Summit in London

British Prime Minister Rishi Sunak is bringing together government officials, academics and tech moguls from around the world for a two-day AI Safety Summit Wednesday and Thursday at Bletchley Park, the once top-secret headquarters of World War II-era codebreakers.

The inaugural symposium is a moment for key players in global affairs to spar over the future of frontier AI, specifically whether the technology represents a danger to humanity and what can be done to mitigate that potential threat. Frontier AI is a broad term for general-purpose systems that operate at the cutting edge of today’s software.

The 100-person guest list includes Elon Musk, the richest man on earth; Sam Altman, the brain behind ChatGPT; and a host of prominent professors and researchers.

World leaders are among those in attendance, including U.S. Vice President Kamala Harris; China’s Vice Minister of Science and Technology Wu Zhaohui; U.N. Secretary-General Antonio Guterres; and European Commission President Ursula von der Leyen.

China, a frontrunner in AI development, has a key role in the forum as Sunak attempts to position himself as a middleman between East and West. The decision to invite China was met with mixed reactions at home in the British Parliament and abroad.

Jane Hartley, the U.S. ambassador to the United Kingdom, made clear that the White House had no part in bringing China to the table.

“This is the U.K. invitation, this is not the U.S.,” Hartley told Reuters. “When the U.K. government was talking to us, we said it’s your summit. So, if you want to invite them, invite them.”

Last week, top officials with the Five Eyes, an intelligence alliance that includes the U.K. and the U.S., banded together for an unprecedented public appearance in which they accused China of stealing tech secrets from Western nations on a massive scale.

As concerns over China’s influence on Big Tech mount, U.S. President Joe Biden signed into law an executive order on Monday giving the federal government greater regulatory power over AI where it may endanger national security, public health or the economy.

On Wednesday, Harris delivered a speech at the summit outlining her administration’s efforts to curb the risks of generative AI. She announced the creation of the United States AI Safety Institute, a new body charged with recommending guidelines and identifying AI risk factors.

Harris also urged other nations to sign on to a U.S.-sponsored pledge for the “responsible and ethical” use of AI in the military.

Some information for this report was provided by Reuters.

Electric Vehicles Hit the Roads in Malawi

Drivers in Malawi are getting an opportunity to purchase electric vehicles through a local startup company. The handful of buyers so far say they no longer have to struggle daily to get fuel at pump stations. Lameck Masina reports from Blantyre.

UK Kicks Off World’s First AI Safety Summit

The world’s first major summit on artificial intelligence (AI) safety opens in Britain Wednesday, with political and tech leaders set to discuss possible responses to the society-changing technology.

British Prime Minister Rishi Sunak, U.S. Vice President Kamala Harris, EU chief Ursula von der Leyen and U.N. Secretary-General Antonio Guterres will all attend the two-day conference, which will focus on growing fears about the implications of so-called frontier AI.

The release of the latest models has offered a glimpse into the potential of AI, but has also prompted concerns around issues ranging from job losses to cyber-attacks and the control that humans actually have over the systems.

Sunak, whose government initiated the gathering, said in a speech last week that his “ultimate goal” was “to work towards a more international approach to safety where we collaborate with partners to ensure AI systems are safe before they are released.

“We will push hard to agree the first ever international statement about the nature of these risks,” he added, drawing comparisons to the approach taken to climate change.

But London has reportedly had to scale back its ambitions around ideas such as launching a new regulatory body amid a perceived lack of enthusiasm.

Italian Prime Minister Giorgia Meloni is the only G7 head of government attending the conference.

Elon Musk is due to appear, but it is not clear yet whether he will be physically at the summit in Bletchley Park, north of London, where top British codebreakers cracked Nazi Germany’s “Enigma” code.

‘Talking shop’

While the potential of AI raises many hopes, particularly for medicine, its development is seen as largely unchecked.

In his speech, Sunak stressed the need for countries to develop “a shared understanding of the risks that we face.”

But lawyer and investigator Cori Crider, a campaigner for “fair” technology, warned that the summit could be “a bit of a talking shop.

“If he were serious about safety, Rishi Sunak needed to roll deep and bring all of the U.K. majors and regulators in tow and he hasn’t,” she told a press conference in San Francisco.

“Where is the labor regulator looking at whether jobs are being made unsafe or redundant? Where’s the data protection regulator?” she asked.

Having faced criticism for only looking at the risks of AI, the U.K. Wednesday pledged $46 million to fund AI projects around the world, starting in Africa.

Ahead of the meeting, the G7 powers agreed on Monday on a non-binding “code of conduct” for companies developing the most advanced AI systems.

The White House announced its own plan to set safety standards for the deployment of AI that will require companies to submit certain systems to government review.

And in Rome, ministers from Italy, Germany and France called for an “innovation-friendly approach” to regulating AI in Europe, as they urged more investment to challenge the U.S. and China.

China will be present, but it is unclear at what level.

News website Politico reported that London invited President Xi Jinping, signaling its eagerness for a senior Chinese representative.

Beijing’s invitation has raised eyebrows amid heightened tensions with Western nations and accusations of technological espionage.

Biden Signs Sweeping Executive Order on AI Oversight

President Joe Biden on Monday signed a wide-ranging executive order on artificial intelligence, covering topics as varied as national security, consumer privacy, civil rights and commercial competition. The administration heralded the order as taking “vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI.”

The order directs departments and agencies across the U.S. federal government to develop policies aimed at placing guardrails around an industry that is developing newer and more powerful systems at a pace that has many concerned it will outstrip effective regulation.

“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said during a signing ceremony at the White House. The order, he added, is “the most significant action any government anywhere in the world has ever taken on AI safety, security and trust.” 

‘Red teaming’ for security 

One of the marquee provisions of the new order requires companies developing advanced artificial intelligence systems to conduct rigorous testing of their products to ensure that bad actors cannot use them for nefarious purposes. The process, known as red teaming, will assess, among other things, “AI systems threats to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks.”

The National Institute of Standards and Technology will set the standards for such testing, and AI companies will be required to report their results to the federal government prior to releasing new products to the public. The Departments of Homeland Security and Energy will be closely involved in the assessment of threats to vital infrastructure. 

To counter the threat that AI will enable the creation and dissemination of false and misleading information, including computer-generated images and “deep fake” videos, the Commerce Department will develop guidance for the creation of standards that will allow computer-generated content to be easily identified, a process commonly called “watermarking.” 
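The order leaves the watermarking mechanism for the Commerce Department to define. As a loose sketch of the simplest form of the idea, a provider could attach a keyed provenance tag that anyone holding the key can later verify; everything below is hypothetical, and real proposals favor statistical watermarks embedded in the generated content itself, which survive copying and editing better than a detachable tag:

```python
import hashlib
import hmac

SECRET = b"provider-signing-key"  # hypothetical key held by the AI provider

def tag_content(content: str) -> str:
    # Append a provenance line: an HMAC over the content, so the tag can
    # be verified and tampering with the text detected.
    mac = hmac.new(SECRET, content.encode(), hashlib.sha256).hexdigest()
    return f"{content}\n[ai-generated:{mac}]"

def verify_tag(tagged: str) -> bool:
    content, _, tag_line = tagged.rpartition("\n")
    if not tag_line.startswith("[ai-generated:"):
        return False
    mac = tag_line[len("[ai-generated:"):-1]  # strip prefix and closing "]"
    expected = hmac.new(SECRET, content.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

tagged = tag_content("A generated paragraph.")
print(verify_tag(tagged))                                 # True
print(verify_tag(tagged.replace("paragraph", "sentence")))  # False
```

The obvious weakness, and the reason a detachable tag is only a sketch, is that deleting the final line removes the provenance entirely; that is why embedding the signal in the content itself is the direction most standards work is taking.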

The order directs the White House chief of staff and the National Security Council to develop a set of guidelines for the responsible and ethical use of AI systems by the U.S. national defense and intelligence agencies.

Privacy and civil rights

The order proposes a number of steps meant to increase Americans’ privacy protections when AI systems access information about them. That includes supporting the development of privacy-protecting technologies such as cryptography and creating rules for how government agencies handle data containing citizens’ personally identifiable information.

However, the order also notes that the United States is currently in need of legislation that codifies the kinds of data privacy protections that Americans are entitled to. Currently, the U.S. lags far behind Europe in the development of such rules, and the order calls on Congress to “pass bipartisan data privacy legislation to protect all Americans, especially kids.”

The order recognizes that the algorithms that enable AI to process information and answer users’ questions can themselves be biased in ways that disadvantage members of minority groups and others often subject to discrimination. It therefore calls for the creation of rules and best practices addressing the use of AI in a variety of areas, including the criminal justice system, health care system and housing market.

The order covers several other areas, promising action on protecting Americans whose jobs may be affected by the adoption of AI technology; maintaining the United States’ market leadership in the creation of AI systems; and assuring that the federal government develops and follows rules for its own adoption of AI systems.

Open questions

Experts say that despite the broad sweep of the executive order, much remains unclear about how the Biden administration will approach the regulations of AI in practice.

Benjamin Boudreaux, a policy researcher at the RAND Corporation, told VOA that while it is clear the administration is “trying to really wrap their arms around the full suite of AI challenges and risks,” much work remains to be done.

“The devil is in the details here about what funding and resources go to executive branch agencies to actually enact many of these recommendations, and just what models a lot of the norms and recommendations suggested here will apply to,” Boudreaux said.

International leadership

Looking internationally, the order says the administration will work to lead “an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety.”

James A. Lewis, senior vice president and director of the strategic technologies program at the Center for Strategic and International Studies, told VOA that the executive order does a good job of laying out where the U.S. stands on many important issues related to the global development of AI.

“It hits all the right issues,” Lewis said. “It’s not groundbreaking in a lot of places, but it puts down the marker for companies and other countries as to how the U.S. is going to approach AI.”

That’s important, Lewis said, because the U.S. is likely to play a leading role in the development of the international rules and norms that grow up around the technology.

“Like it or not — and certainly some countries don’t like it — we are the leaders in AI,” Lewis said. “There’s a benefit to being the place where the technology is made when it comes to making the rules, and the U.S. can take advantage of that.”

‘Fighting the last war’ 

Not all experts are certain the Biden administration’s focus is on the real threats that AI might present to consumers and citizens. 

Louis Rosenberg, a 30-year veteran of AI development and the CEO of American tech firm Unanimous AI, told VOA he is concerned the administration may be “fighting the last war.”

“I think it’s great that they’re making a bold statement that this is a very important issue,” Rosenberg said. “It definitely shows that the administration is taking it seriously and that they want to protect the public from AI.”

However, he said, when it comes to consumer protection, the administration seems focused on how AI might be used to advance existing threats to consumers, like fake images and videos and convincing misinformation — things that already exist today.

“When it comes to regulating technology, the government has a track record of underestimating what’s new about the technology,” he said.

Rosenberg said he is more concerned about the new ways in which AI might be used to influence people. For example, he noted that AI systems are being built to interact with people conversationally.

“Very soon, we’re not going to be typing in requests into Google. We’re going to be talking to an interactive AI bot,” Rosenberg said. “AI systems are going to be really effective at persuading, manipulating, potentially even coercing people conversationally on behalf of whomever is directing that AI. This is the new and different threat that did not exist before AI.” 

Musk Pulls Plug on Paying for X Factchecks

Elon Musk has said that corrections to posts on X would no longer be eligible for payment, as the social network comes under mounting criticism for becoming a conduit for misinformation.

In the year since taking over Twitter, now rebranded as X, Musk has gutted content moderation, restored accounts of previously banned extremists, and allowed users to purchase account verification, helping them profit from viral — but often inaccurate — posts.

Musk has instead promoted Community Notes, in which X users police the platform, as a tool to combat misinformation. 

But on Sunday, Musk tweeted a modification in how Community Notes works.

“Making a slight change to creator monetization: Any posts that are corrected by @CommunityNotes become ineligible for revenue share,” he wrote.  

“The idea is to maximize the incentive for accuracy over sensationalism,” he added. 

X pays content creators whose work generates lots of views a share of advertising revenue. 

Musk warned against using corrections to make X users ineligible for receiving payouts.

“Worth ‘noting’ that any attempts to weaponize @CommunityNotes to demonetize people will be immediately obvious, because all code and data is open source,” he posted.

Musk’s announcement follows Friday’s unveiling of a $16-a-month subscription plan under which users who pay more get the biggest boost for their replies. Earlier this year, X unveiled an $8-a-month plan that includes a “verified” account.

A recent study by the disinformation monitoring group NewsGuard found that verified, paying subscribers were the big spreaders of misinformation about the Israel-Hamas war. 

“Nearly three-fourths of the most viral posts on X advancing misinformation about the Israel-Hamas War are being pushed by ‘verified’ X accounts,” the group said.

It said the 250 most-engaged posts that promoted one of 10 prominent false or unsubstantiated narratives related to the war were viewed more than 100 million times globally in just one week. 

NewsGuard said 186 of those posts were made from verified accounts and only 79 had been fact-checked by Community Notes. 

Verified accounts “turned out to be a boon for bad actors sharing misinformation,” said NewsGuard.

“For less than the cost of a movie ticket, they have gained the added credibility associated with the once-prestigious blue checkmark, enabling them to reach a larger audience on the platform,” it said.

While the organization said it found misinformation spreading widely on other social media platforms such as Facebook, Instagram, TikTok and Telegram, it added that it found false narratives about the Israel-Hamas war tend to go viral on X before spreading elsewhere. 

Musk Says Starlink to Provide Connectivity in Gaza

Elon Musk said on Saturday that SpaceX’s Starlink will support communication links in Gaza with “internationally recognized aid organizations.”

A telephone and internet blackout isolated people in the Gaza Strip from the world and from each other on Saturday, with calls to loved ones, ambulances or colleagues elsewhere all but impossible as Israel widened its air and ground assault.

International humanitarian organizations said the blackout, which began on Friday evening, was worsening an already desperate situation by impeding lifesaving operations and preventing them from contacting their staff on the ground.

Following Russia’s February 2022 invasion of Ukraine, Starlink satellites were reported to have been critical to maintaining internet connectivity in some areas despite attempted Russian jamming.

Since then, Musk has said he declined to extend coverage over Russian-occupied Crimea, refusing to allow his satellites to be used for Ukrainian attacks on Russian forces there.

Inside a Drone Factory: How It Helps Ukraine’s Defense Efforts

Brinc Drones is one of the U.S. companies shipping hundreds of drones to Ukraine. These drones are designed to help first responders survey areas hit by Russian shelling and find survivors. Adriy Borys visited the Brinc manufacturing facility. Anna Rice narrates his story. Camera — Dmitriy Savchuk.

Zara Owner Inditex to Buy Recycled Polyester From US Start-Up

Zara-owner Inditex, the world’s biggest clothing retailer, has agreed to buy recycled polyester from a U.S. start-up as it aims for 25% of its fibers to come from “next-generation” materials by 2030.

As fast-fashion retailers face pressure to reduce waste and use recycled fabrics, Inditex is spending more than $74 million to secure supply from Los Angeles-based Ambercycle of its recycled polyester made from textile waste.

Polyester, a product of the petroleum industry, is widely used in sportswear as it is quick-drying and durable.

Under the offtake deal, Inditex will buy 70% of Ambercycle’s production of recycled polyester, which is sold under the brand cycora, over three years, Inditex CEO Oscar Garcia Maceiras said at a business event in Zaragoza, Spain.

Garcia Maceiras said Inditex is also working with other companies and start-ups in its innovation hub, a unit looking for ways to curb the environmental impact of its products.

“The sustainable transformation of Inditex … is not possible without the collaboration of the different stakeholders,” he said.

The Inditex investment will help Ambercycle fund its first commercial-scale textile recycling factory. Production of cycora at the plant is expected to begin around 2025, and the material will be used in Inditex products over the following three years.

Zara Athleticz, a sub-brand of sportswear for men, on Wednesday launched a collection of "technical pieces" containing up to 50% cycora. Inditex said the collection would be available from Zara.com.

Some apparel brands seeking to reduce their reliance on virgin polyester have switched to recycled polyester derived from plastic bottles, but that practice has come under criticism as it has created more demand for used plastic bottles, pushing up prices.

Textile-to-textile polyester recycling is in its infancy, though, and will take time to reach the scale required by global fashion brands.

“We want to drive innovation to scale-up new solutions, processes and materials to achieve textile-to-textile recycling,” Inditex’s chief sustainability officer Javier Losada said in a statement.

The Ambercycle deal marks the latest in a series of investments made by Inditex into textile recycling start-ups.

Last year it signed a $104 million, three-year deal to buy 30% of the recycled fiber produced by Finland’s Infinited Fiber Co., and also invested in Circ, another U.S. firm focused on textile-to-textile recycling.

In Spain, Inditex has joined forces with rivals, including H&M and Mango, in an association to manage clothing waste, as the industry prepares for EU legislation requiring member states to separately collect textile waste beginning January 2025.

33 US States Sue Meta, Accusing Platform of Harming Children

Thirty-three U.S. states are suing Meta Platforms Inc., accusing it of damaging young people's mental health through the addictive nature of its social media platforms.

The suit, filed Tuesday in federal court in Oakland, California, alleges Meta knowingly built addictive features into its social media platforms, Instagram and Facebook, and collected data on children younger than 13 without their parents' consent, in violation of federal law.

“Research has shown that young people’s use of Meta’s social media platforms is associated with depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes,” the complaint says.

The filing comes after Meta’s own research in 2021 found that the company was aware of the damage Instagram can do to teenagers, especially girls.

In Meta’s 2021 study, 13.5% of teen girls said Instagram makes thoughts of suicide worse and 17% of teen girls said it makes eating disorders worse.

Meta responded to the lawsuit by saying it has “already introduced over 30 tools to support teens and their families.”

“We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path,” the company added.

Meta is one of many social media companies facing criticism and legal action, with lawsuits also filed against ByteDance’s TikTok and Google’s YouTube.

Measures to protect children on social media, such as a federal law barring kids under 13 from setting up accounts, exist but are easily circumvented.

The dangers of social media for children have been highlighted by U.S. Surgeon General Dr. Vivek Murthy, who said the effects of social media require “immediate action to protect kids now.”

In addition to the 33 states suing, nine more state attorneys general are expected to join and file similar lawsuits.

Some information in this report came from The Associated Press and Reuters. 

Taiwan Computer Chip Workers Adjust to Life in American Desert

Phoenix, Arizona, in America’s Southwest, is the site of a Taiwanese semiconductor chipmaking facility. One part of President Joe Biden’s cornerstone agenda is to rely less on manufacturing from overseas and boost domestic production of the chips that run everything from phones to cars. Many Taiwanese workers who moved to the U.S. to work at the facility face the challenges of living in a new land. VOA’s Stella Hsu, Enming Liu and Elizabeth Lee have the story.

Governments, Firms Should Spend More on AI Safety, Top Researchers Say

Artificial intelligence companies and governments should allocate at least one third of their AI research and development funding to ensuring the safety and ethical use of the systems, top AI researchers said in a paper on Tuesday. 

The paper, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks. 

“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented,” according to the paper written by three Turing Award winners, a Nobel laureate, and more than a dozen top AI academics. 

Currently there are no broad-based regulations focused on AI safety, and the European Union's first set of AI legislation has yet to become law, as lawmakers have not yet agreed on several issues.

“Recent state of the art AI models are too powerful, and too significant, to let them develop without democratic oversight,” said Yoshua Bengio, one of the three researchers known as the godfathers of AI.

“It [investments in AI safety] needs to happen fast, because AI is progressing much faster than the precautions taken,” he said.

Authors include Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari.

Since the launch of OpenAI’s generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, with some calling for a six-month pause in developing powerful AI systems.

Some companies have countered this, saying they will face high compliance costs and disproportionate liability risks.

“Companies will complain that it’s too hard to satisfy regulations — that ‘regulation stifles innovation’ — that’s ridiculous,” said British computer scientist Stuart Russell.

“There are more regulations on sandwich shops than there are on AI companies.” 

Kenyan Developers Launch App to Prevent Phone Theft

Kenyan developers have designed a mobile phone application that police say is helping to safeguard smartphones from theft, recover stolen cell phones and prevent loss of data. Victoria Amunga reports from Nairobi. Camera: Jimmy Makhulo

US Sounds Alarm on Russian Election Efforts

Russia’s efforts to discredit and undermine democratic elections appear to be expanding rapidly, according to newly declassified intelligence, spurred on by what the Kremlin sees as its success in disrupting the past two U.S. presidential elections.

The U.S. intelligence findings, shared in a diplomatic cable sent to more than 100 countries and obtained by VOA, are based on a review of Russian information operations between January 2020 and December 2022 that found Moscow “engaged in a concerted effort … to undermine public confidence in at least 11 elections across nine democracies.”

The review also found what the cable describes as “a less pronounced level of Russian messaging and social media activity” that targeted another 17 democracies.

“These figures represent a snapshot of Russian activities,” the cable warned. “Russia likely has sought to undermine confidence in democratic elections in additional cases that have gone undetected.

“Our information indicates that senior Russian government officials, including in the Kremlin, see value in this type of influence operation and perceive it to be effective,” the cable added.

VOA reached out to the Russian Embassy for comment on the cable warnings but so far has not received a response.

Russia has routinely denied allegations it interferes in foreign elections. However, last November, Wagner chief Yevgeny Prigozhin appeared to admit culpability for interfering in U.S. elections in a social media post.

“Gentlemen, we interfered, we interfere and we will interfere,” Prigozhin said.

U.S. officials assess that, in addition to Russia’s efforts to sow doubt surrounding the 2016 and 2020 elections in the United States, Russian campaigns have targeted countries in Asia, Europe, the Middle East and South America.

The goal, they say, is specifically to erode public confidence in election results and to paint the newly elected governments as illegitimate — using internet trolls, social media influencers, proxy websites linked to Russian intelligence and even Russian state-run media channels like RT and Sputnik.

And even though Russia’s resources have been strained by its invasion of Ukraine, Moscow’s election interference efforts do not seem to be slowing down.

It is “a fairly low cost, low barrier to entry operation,” said a senior U.S. intelligence official, who spoke on the condition of anonymity in order to discuss the intelligence assessment.

“In many cases they’re amplifying existing domestic narratives that kind of question the integrity of elections,” the official said. “This is a very efficient use of resources. All they’re doing is magnifying claims that it’s unfair or it didn’t work or it’s chaotic.”

U.S. officials said they have started giving more detailed, confidential briefings to select countries that are being targeted by Russia. Some of the countries, they said, have likewise promised to share intelligence gathered from their own investigations.

Additionally, the cable makes a series of recommendations to counter the threat from the Russian disinformation campaigns, including for countries to expose, sanction and even expel any Russian officials involved in spreading misinformation or disinformation.

The cable also encourages democratic countries to engage in information campaigns to share factual information about their elections and to turn to independent election observers to assess and affirm the integrity of any elections.

Philippines Orders Military to Stop Using AI Apps Due to Security Risks

The Philippine defense chief has ordered all defense personnel and the 163,000-member military to refrain from using digital applications that harness artificial intelligence to generate personal portraits, saying they could pose security risks.

Defense Secretary Gilberto Teodoro Jr. issued the order in a Saturday memorandum, as Philippine forces have been working to weaken decades-old communist and Muslim insurgencies and defend territorial interests in the disputed South China Sea.

The Department of National Defense on Friday confirmed the authenticity of the memo, which has been circulating online in recent days, but did not provide other details, including what prompted Teodoro to issue the prohibition.

Teodoro specifically warned against the use of a digital app that requires users to submit at least 10 pictures of themselves and then harnesses AI to create “a digital person that mimics how a real individual speaks and moves.” Such apps pose “significant privacy and security risks,” he said.

“This seemingly harmless and amusing AI-powered application can be maliciously used to create fake profiles that can lead to identity theft, social engineering, phishing attacks and other malicious activities,” Teodoro said. “There has already been a report of such a case.”

Teodoro ordered all defense and military personnel “to refrain from using AI photo generator applications and practice vigilance in sharing information online” and said their actions should adhere to the Philippine Defense Department’s values and policies.

Chinese Netizens Post Hate-Filled Comments to Israeli Embassy’s Online Account

After the Hamas attack on Israel, the Israeli Embassy in Beijing began posting on China’s social media platform Weibo. The online effort to gain popular support appears to be backfiring as comments revile the Jewish state, applaud Hamas and praise Adolf Hitler.

The embassy’s account, which has 24 million followers, shows almost 100 posts since the Oct. 7 attack. Some are disturbing, such as an image of a baby’s corpse burnt in the attack. Others suggest Israeli resilience, such as the story of one person who was wounded at the Nova Festival but rescued several other music fans after the attack.

The comment areas have been flooded with hate speech such as “Heroic Hamas, good job!” and “Hitler was wise,” referring to the German leader who orchestrated the deaths of 6 million Jews before and during World War II. Many people changed their Weibo avatars to the Israeli flag with a Nazi swastika in the middle.

Occasionally, someone expresses support for Israel and accuses Hamas of being a terrorist group. This triggers strong reactions from other netizens, such as “Only dead Israelis are good Israelis” and “the United States supports Israel, and the friend of the enemy is the enemy.”

Similar commentary has flooded sites elsewhere on China’s heavily censored internet.

VOA Mandarin could not determine how many of the Weibo accounts posting to the Israeli Embassy account belong to people who work for the Chinese government.

The Israeli Embassy in China did not respond to interview requests from VOA Mandarin.

Eric Liu, a former Weibo moderator who is now editor of China Digital Times, told VOA Mandarin the Israeli Embassy “has received more comments recently, which are very straightforwardly hateful, with antisemitic content. They probably have taken the initiative to contain it.”

Liu believes that because the antisemitic remarks remain online, the Chinese government is comfortable with them. China has long backed the Palestinian cause, but more recently it has also boosted ties with Israel as it seeks a larger role in trade, technology and diplomacy.

“It’s more of a voice influenced by public opinion,” he said. “Relatively speaking, it is an extreme voice. Moderate voices cannot be heard. Most of the participants are habitual offenders who hate others. But they are also spontaneous, or rather, they are spontaneous under the guidance” of the government censors.

Gu Guoping, a retired Shanghai teacher and human rights citizen-journalist, told VOA Mandarin, “I don’t go to Weibo, WeChat, or QQ. These are all anti-human brainwashing platforms controlled by the Chinese Communist Party. Due to the CCP’s long-term brainwashing and indoctrination of ordinary people, as well as internet censorship, many Weibo users … [confuse] right and wrong.”

“They don’t know Israel at all. The Israeli nation is an amazing, great, humane and civilized nation,” said Gu, who emphasized that Hamas killed innocent people in Israel first, and Israel’s counterattack was legitimate self-defense.

Liu said that Weibo moderators usually must delete hateful comments toward foreign embassies in China. However, they may receive instructions from the Cyberspace Administration of China and the State Council Information Office for major incidents, and different standards may be applied.

VOA Mandarin contacted the Chinese Embassy in Washington, Cyberspace Administration of China and the State Council Information Office for comment but did not receive a reply.

“The government’s opinion has been very, very clear, which is why the online public opinion has such an obvious tendency,” Liu said. “It must be the all-round propaganda machine that led the public opinion to be like this.”

While calling for a cease-fire in the Israel-Gaza conflict, Chinese officials have refused to condemn Hamas by name. Some observers say Beijing is exploiting the Israel-Hamas war to diminish U.S. influence.

On Saturday, China’s Foreign Minister Wang Yi condemned Israel for going “beyond the scope of self-defense” and called for it to “cease its collective punishment of the people of Gaza.”

When the Iranian Embassy in China posted comments by the Iranian president accusing the United States and Israel of causing the deadly explosion at the Ahli Arab Hospital, Chinese netizens posted their support.

U.S. President Joe Biden said during his visit to Tel Aviv on October 18 that the “intel” provided by his team regarding the hospital attack exonerated Israel. Israel said the militant group Islamic Jihad caused the blast that killed at least 100 people. The militant group that often works with Hamas has denied responsibility. Palestinian officials and several Arab leaders accuse Israel of hitting the hospital amid its ongoing airstrikes in Gaza.

The Weibo accounts of other foreign embassies and diplomats that have posted support for Israel have also been targeted by Chinese netizens. When the Swiss ambassador to China, Jürg Burri, posted on Oct. 13, “I send my deepest condolences to the victims and their families in the terrorist attacks in Gaza,” he was criticized for “pseudo-neutrality.”

“I don’t even want to wear a Swiss watch anymore! So angry,” said one netizen.

Liu believes the netizens’ support for Gaza will change.

“It’s not like that they stand with Palestine,” he said. “Maybe they will hate Palestine tomorrow because they believe in Islam. [The posters] are talking in general terms and do not care about the life and death of Palestine. Hatred of Israelis and Jews is the core.”

EU Opens Disinformation Probes into Meta, TikTok

The EU announced probes Thursday into Facebook owner Meta and TikTok, seeking more details on the measures they have taken to stop the spread of “illegal content and disinformation” after the Hamas attack on Israel.

The European Commission said it had sent formal requests for information to Meta and TikTok, the first procedures launched under the EU’s new law on digital content.

The EU launched a similar probe into billionaire mogul Elon Musk’s social media platform X, formerly Twitter, last week.

The commission said the request to Meta related “to the dissemination and amplification of illegal content and disinformation” around the Hamas-Israel conflict.

In a separate statement, it said it wanted to know more about TikTok’s efforts against “the spreading of terrorist and violent content and hate speech.”

The EU’s executive arm added that it wanted more information from Meta on its “mitigation measures to protect the integrity of elections.”

Meta and TikTok have until October 25 to respond, with a deadline of November 8 for less urgent aspects of the demand for information.

The commission said it also sought more details about how TikTok was complying with rules on protecting minors online.

The European Union has built a powerful armory to challenge the power of big tech with its landmark Digital Services Act (DSA) and a sister law, the Digital Markets Act, that hits internet giants with tough new curbs on how they do business.

The EU’s fight against disinformation has intensified since Moscow’s invasion of Ukraine last year and Russian attempts to sway European public opinion.

The issue has gained further urgency since Hamas’ October 7 assault on Israel and its aftermath, which sparked a wave of violent images that flooded the platforms.

The DSA came into effect in August for “very large” platforms, those with more than 45 million monthly European users, a category that includes Meta and TikTok.

The DSA bans illegal online content under threat of fines running as high as six percent of a company’s global turnover.

The EU’s top tech enforcer, Thierry Breton, sent warning letters to tech CEOs including Meta’s Mark Zuckerberg, TikTok’s Shou Zi Chew and Sundar Pichai of YouTube owner Alphabet.

Growing EU fears

Breton, EU internal market commissioner, told the executives to crack down on illegal content following Hamas’ attack.

Meta said last week that it was putting special resources towards cracking down on illegal and problematic content related to the Hamas-Israel conflict.

On Wednesday, Breton expressed his fears over the impact of disinformation on the EU.

“The widespread dissemination of illegal content and disinformation… carries a clear risk of stigmatization of certain communities, destabilization of our democratic structures, not to mention the exposure of our children to violent content,” he said.

AFP fact-checkers have found several posts on Facebook, TikTok and X promoting a fake White House document purporting to allocate $8 billion in military assistance to Israel.

And several platforms have had users passing off material from other conflicts, or even from video games, as footage from Israel or Gaza.

Since the EU’s tougher action on digital behemoths, some companies, including Meta, are exploring whether to offer a paid-for version of their services in the European Union.

To Find Out How Wildlife Is Doing, Scientists Try Listening

A reedy pipe and a high-pitched trill duet against the backdrop of a low-pitched insect drone. Their symphony is the sound of a forest, one that scientists monitor to gauge biodiversity.

The recording from the forest in Ecuador is part of new research looking at how artificial intelligence could track animal life in recovering habitats.

When scientists want to measure reforestation, they can survey large tracts of land with tools like satellite imagery and lidar.

But determining how fast and abundantly wildlife is returning to an area presents a more difficult challenge — sometimes requiring an expert to sift through sound recordings and pick out animal calls.

Jörg Müller, a professor and field ornithologist at the University of Würzburg Biocenter, wondered if there was a different way.

“I saw the gap that we need, particularly in the tropics, better methods to quantify the huge diversity… to improve conservation actions,” he told AFP.

He turned to bioacoustics, which uses sound to learn more about animal life and habitats.

It is a long-standing research tool, but it is increasingly being paired with machine learning to process large amounts of data more quickly.

Muller and his team recorded audio at sites in Ecuador’s Choco region ranging from recently abandoned cacao plantations and pastures to agricultural land recovering from use to old-growth forests.

They first had experts listen to the recordings and pick out birds, mammals and amphibians.

Then, they carried out an acoustic index analysis, which gives a measure of biodiversity based on broad metrics from a soundscape, like volume and frequency of noises.
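The study's exact indices are not detailed here, but purely as an illustration, one common soundscape metric of this kind, normalized spectral entropy, can be sketched in a few lines of Python (the function name and interpretation thresholds are this sketch's own, not the researchers'):

```python
import numpy as np

def spectral_entropy(samples: np.ndarray) -> float:
    """Normalized spectral entropy of an audio clip, a simple acoustic index:
    values near 1 mean energy is spread across many frequencies (a busier,
    potentially more diverse soundscape); values near 0 mean one dominant tone."""
    power = np.abs(np.fft.rfft(samples)) ** 2   # power per frequency bin
    p = power / power.sum()                     # treat spectrum as a probability distribution
    p = p[p > 0]                                # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum() / np.log2(power.size))
```

As a sanity check, a pure 440 Hz tone scores close to 0 while broadband noise scores close to 1; real forest recordings fall in between, and index values like this are what get compared across sites.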

Finally, they ran two weeks of recordings through an AI-assisted computer program trained to distinguish 75 bird calls.

More recordings needed

The program was able to pick out the calls on which it was trained in a consistent way, but could it correctly identify the relative biodiversity of each location?

To check this, the team used two baselines: one from the experts who listened to the audio recordings, and a second based on insect samples from each location, which offer a proxy for biodiversity.

While the limited library of sounds available to train the AI model meant it could identify only a quarter of the bird calls the experts could, it was still able to correctly gauge biodiversity levels in each location, the study said.

“Our results show that soundscape analysis is a powerful tool to monitor the recovery of faunal communities in hyperdiverse tropical forest,” said the research published Tuesday in the journal Nature Communications.

“Soundscape diversity can be quantified in a cost-effective and robust way across the full gradient from active agriculture to recovering and old-growth forests,” it added.

There are still shortcomings, including a paucity of animal sounds on which to train AI models.

And the approach can only capture species that announce their presence.

“Of course (there is) no information on plants or silent animals. However, birds and amphibians are very sensitive to ecological integrity, they are a very good surrogate,” Muller told AFP.

He believes the tool could become increasingly useful given the current push for “biodiversity credits” — a way of monetizing the protection of animals in their natural habitat.

“Being able to directly quantify biodiversity, rather than relying on proxies such as growing trees, encourages and allows external assessment of conservation actions, and promotes transparency,” the study said.

 
