
Russian Malware Targeting Ukrainian Mobile Devices

Ukrainian troops using Android mobile devices are coming under attack from Russian hackers, who are using a new kind of malware to try to steal information critical to the ongoing counteroffensive.

Cyber officials from the United States, along with counterparts from Australia, Britain, Canada and New Zealand, issued a warning Thursday about the malware, named Infamous Chisel, which aims to scan files, monitor communications and “periodically steal sensitive information.”

The U.S. Cybersecurity and Infrastructure Security Agency, or CISA, describes the new malware as “a collection of components which enable persistent access to an infected Android device … which periodically collates and exfiltrates victim information.”


A CISA report published Thursday shared additional technical details about the Russian campaign, with officials warning the malware could be employed against other targets.

Thursday’s warning reflects “the need for all organizations to keep their Shields Up to detect and mitigate Russian cyber activity, and the importance of continued focus on maintaining operational resilience under all conditions,” said Eric Goldstein, CISA executive assistant director for cybersecurity, in a statement.

According to the report by the U.S. and its allies, the malware is designed to persist on a system by replacing legitimate code with malicious code that is retrieved from outside the system and is not directly attached to the malware itself.

It also said the malware’s components are of “low to medium sophistication and appear to have been developed with little regard to defense evasion or concealment of malicious activity.”

Ukraine’s SBU security agency first discovered the Russian malware earlier in August, saying it was being used to “gain access to the combat data exchange system of the Armed Forces of Ukraine.”

Ukrainian officials said at the time they were able to launch defensive cyber operations to expose and block the Russian efforts.

An SBU investigation determined that Russia was able to launch the malware attack after capturing Ukrainian computer tablets on the battlefield.

Ukraine attributed the attack to a cyber threat actor known as Sandworm, which U.S. and British officials have previously linked to the GRU, Russia’s military intelligence service.


FBI-Led Operation Dismantles Notorious Qakbot Malware

A global operation led by the FBI has dismantled one of the most notorious cybercrime tools used to launch ransomware attacks and steal sensitive data.

U.S. law enforcement officials announced on Tuesday that the FBI and its international partners had disrupted the Qakbot infrastructure and seized nearly $9 million in illicit profits in cryptocurrency.

Qakbot, also known as Qbot, was a sophisticated botnet and malware that infected hundreds of thousands of computers around the world, allowing cybercriminals to access and control them remotely.

“The Qakbot malicious code is being deleted from victim computers, preventing it from doing any more harm,” the U.S. Attorney’s Office for the Central District of California said in a statement.

Martin Estrada, the U.S. attorney for the Central District of California, and Don Alway, the FBI assistant director in charge of the Los Angeles field office, announced the operation at a press conference in Los Angeles.

Estrada called the operation “the largest U.S.-led financial and technical disruption of a botnet infrastructure” used by cybercriminals to carry out ransomware, financial fraud, and other cyber-enabled crimes.

“Qakbot was the botnet of choice for some of the most infamous ransomware gangs, but we have now taken it out,” Estrada said.

Law enforcement agencies from France, Germany, the Netherlands, the United Kingdom, Romania, and Latvia took part in the operation, code-named Duck Hunt.

“These actions will prevent an untold number of cyberattacks at all levels, from the compromised personal computer to a catastrophic attack on our critical infrastructure,” Alway said.

As part of the operation, the FBI was able to gain access to the Qakbot infrastructure and identify more than 700,000 infected computers around the world, including more than 200,000 in the United States.

To disrupt the botnet, the FBI first seized the Qakbot servers and command-and-control system. Agents then rerouted Qakbot traffic to servers controlled by the FBI, which in turn instructed infected computers to download a file created by law enforcement that uninstalled the Qakbot malware.


Meta Fights Sprawling Chinese ‘Spamouflage’ Operation

Meta on Tuesday said it purged thousands of Facebook accounts that were part of a widespread online Chinese spam operation trying to covertly boost China and criticize the West.

The campaign, which became known as “Spamouflage,” was active across more than 50 platforms and forums including Facebook, Instagram, TikTok, YouTube and X, formerly known as Twitter, according to a Meta threat report.

“We assess that it’s the largest, though unsuccessful, and most prolific covert influence operation that we know of in the world today,” said Meta Global Threat Intelligence Lead Ben Nimmo.

“And we’ve been able to link Spamouflage to individuals associated with Chinese law enforcement.”

More than 7,700 Facebook accounts, along with 15 Instagram accounts, were removed in what Meta described as the biggest-ever single takedown action on the tech giant’s platforms.

“For the first time we’ve been able to tie these many clusters together to confirm that they all go to one operation,” Nimmo said.

The network typically posted praise for China and its Xinjiang province and criticisms of the United States, Western foreign policies, and critics of the Chinese government including journalists and researchers, the Meta report says.

The operation originated in China and its targets included Taiwan, the United States, Australia, Britain, Japan, and global Chinese-speaking audiences. 

Facebook or Instagram accounts or pages identified as part of the “large and prolific covert influence operation” were taken down for violating Meta rules against coordinated deceptive behavior on its platforms.

Meta’s team said the network seemed to garner scant engagement, with viewer comments tending to point out bogus claims.

Clusters of fake accounts were run from various parts of China, with the cadence of activity strongly suggesting groups working from an office with daily job schedules, according to Meta.

‘Doppelganger’ campaign

Some tactics used in China were similar to those of a Russian online deception network exposed in 2019, which suggested the operations might be learning from one another, according to Nimmo.

Meta’s threat report also provided analysis of the Russian influence campaign called Doppelganger, which was first disrupted by the security team a year ago.

The core of the operation was to mimic websites of mainstream news outlets in Europe and post bogus stories about Russia’s war on Ukraine, then try to spread them online, said Meta head of security policy Nathaniel Gleicher.  

Companies involved in the campaign were recently sanctioned by the European Union.

Meta said Germany, France and Ukraine remained the most targeted countries overall, but that the operation had added the United States and Israel to its list of targets.

This was done by spoofing the domains of major news outlets, including The Washington Post and Fox News.

Gleicher described Doppelganger, which is intended to weaken support for Ukraine, as the largest and most aggressively persistent influence operation from Russia that Meta has seen since 2017.


AI Hackathons Aim to Spur Innovation, Attract Investors

The tech industry is rushing to unlock the potential of artificial intelligence, and AI hackathons — daylong collaborations using the technology to tackle real-world problems — are increasing in popularity. From the state of Washington, Natasha Mozgovaya has more.


Glitch Halts Toyota Factories in Japan

Toyota said Tuesday it had been hit by a technical glitch that forced it to suspend production at all 14 of its factories in Japan.

The world’s biggest automaker gave no further details on the stoppage, which began Tuesday morning, but said it did not appear to be caused by a cyberattack.

The company said the glitch prevented its system from processing orders for parts, resulting in the suspension of a dozen factories, or 25 production lines, on Tuesday morning.

The company later decided to halt the afternoon shift of the two other operational factories, suspending all of Toyota’s domestic plants, or 28 production lines.

“We do not believe the problem was caused by a cyberattack,” the company said in a statement to AFP.

“We will continue to investigate the cause and to restore the system as soon as possible.”

The incident affected only Japanese factories, Toyota said.

It was not immediately clear exactly when normal production might resume. 

The news briefly sent Toyota shares into the red in the morning session before they recovered.

Last year, Toyota had to suspend all of its domestic factories after a subsidiary was hit by a cyberattack.

The company is one of the biggest in Japan, and its production activities have an outsized impact on the country’s economy.

Toyota is famous for its “just-in-time” production system of providing only small deliveries of necessary parts and other items at various steps of the assembly process.

This practice minimizes costs while improving efficiency and is studied by other manufacturers and at business schools around the world, but also comes with risks.

The auto titan retained its global top-selling auto crown for the third year in a row in 2022 and aims to earn an annual net profit of $17.6 billion this fiscal year.

Major automakers are enjoying a robust surge of global demand after the COVID-19 pandemic slowed manufacturing activities.

Severe shortages of semiconductors had limited production capacity for a host of goods ranging from cars to smartphones.

Toyota has said chip supplies were improving and that it had raised product prices, while it worked with suppliers to bring production back to normal. 

However, the company was still experiencing delays in the deliveries of new vehicles to customers, it added.


ChatGPT Turns to Business as Popularity Wanes

OpenAI on Monday said it was launching a business version of ChatGPT as its artificial intelligence sensation grapples with declining usage nine months after its historic debut.

ChatGPT Enterprise will offer business customers a premium version of the bot, with “enterprise grade” security and privacy enhancements over previous versions, OpenAI said in a blog post.

The question of data security has become an important one for OpenAI, with major companies, including Apple, Amazon and Samsung, blocking employees from using ChatGPT out of fear that sensitive information will be divulged.

“Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data,” OpenAI said.

The ChatGPT business version resembles Bing Chat Enterprise, an offering by Microsoft, which uses the same OpenAI technology through a major partnership.

ChatGPT Enterprise will be powered by GPT-4, OpenAI’s highest performing model, much like ChatGPT Plus, the company’s subscription version for individuals, but business customers will have special perks, including better speed.

“We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the company said.

It added that companies including Carlyle, The Estée Lauder Companies and PwC were already early adopters of ChatGPT Enterprise.

The launch came as ChatGPT is struggling to maintain the excitement that made it the world’s fastest-downloaded app in the weeks after its release.

That distinction was taken over last month by Threads, the Twitter rival from Facebook-owner Meta.

According to analytics company Similarweb, ChatGPT traffic dropped by nearly 10% in June and again in July, declines the firm said could be attributed to school summer break.

Similarweb estimates that roughly one-quarter of ChatGPT’s users worldwide fall in the 18- to 24-year-old demographic.

OpenAI is also facing pushback from news publishers and other platforms — including X, formerly known as Twitter, and Reddit — that are now blocking OpenAI web crawlers from mining their data for AI model training.

A pair of studies by pollster Pew Research Center released on Monday also pointed to doubts about AI and ChatGPT in particular.

Two-thirds of the U.S.-based respondents who had heard of ChatGPT say their main concern is that the government will not go far enough in regulating its use.

The research also found that the use of ChatGPT for learning and work tasks has ticked up from 12% of those who had heard of ChatGPT in March to 16% in July.

Pew also reported that 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence.


Cybercrime Set to Threaten Canada’s Security, Prosperity, Says Spy Agency

Organized cybercrime is set to pose a threat to Canada’s national security and economic prosperity over the next two years, the Communications Security Establishment (CSE) said in a report released Monday.

The national intelligence agency identified Russia and Iran as cybercrime safe havens where criminals can operate against Western targets.

Ransomware attacks on critical infrastructure such as hospitals and pipelines can be particularly profitable, the report said. Cyber criminals continue to show resilience and an ability to innovate their business model, it said.

“Organized cybercrime will very likely pose a threat to Canada’s national security and economic prosperity over the next two years,” said CSE, which is the Canadian equivalent of the U.S. National Security Agency.

“Ransomware is almost certainly the most disruptive form of cybercrime facing Canada because it is pervasive and can have a serious impact on an organization’s ability to function,” it said.

Official data show that in 2022, there were 70,878 reports of cyber fraud in Canada with over C$530 million ($390 million) stolen.

But Chris Lynam, director general of Canada’s National Cybercrime Coordination Centre, said very few crimes were reported and the real amount stolen last year could easily be C$5 billion or more.

“Every sector is being targeted along with all types of businesses as well … folks really have to make sure that they’re taking this seriously,” he told a briefing.

Russian intelligence services and law enforcement almost certainly maintain relationships with cyber criminals and allow them to operate with near impunity as long as they focus on targets outside the former Soviet Union, CSE said.

Moscow has consistently denied that it carries out or supports hacking operations.

Tehran likely tolerates cybercrime activities by Iran-based cyber criminals that align with the state’s strategic and ideological interests, CSE added.


New Study: Don’t Ask Alexa or Siri if You Need Info on Lifesaving CPR

Ask Alexa or Siri about the weather. But if you want to save someone’s life? Call 911 for that.

Voice assistants often fall flat when asked how to perform CPR, according to a study published Monday.

Researchers asked voice assistants eight questions that a bystander might pose in a cardiac arrest emergency. In response, the voice assistants said:

  • “Hmm, I don’t know that one.”

  • “Sorry, I don’t understand.”

  • “Words fail me.”

  • “Here’s an answer … that I translated: The Indian Penal Code.”

Only nine of 32 responses suggested calling emergency services for help — an important step recommended by the American Heart Association. Some voice assistants sent users to web pages that explained CPR, but only 12% of the 32 responses included verbal instructions.

Verbal instructions are important because immediate action can save a life, said study co-author Dr. Adam Landman, chief information officer at Mass General Brigham in Boston.

Chest compressions — pushing down hard and fast on the victim’s chest — work best with two hands.

“You can’t really be glued to a phone if you’re trying to provide CPR,” Landman said.

For the study, published in JAMA Network Open, researchers tested Amazon’s Alexa, Apple’s Siri, Google’s Assistant and Microsoft’s Cortana in February. They asked questions such as “How do I perform CPR?” and “What do you do if someone does not have a pulse?”

Not surprisingly, better questions yielded better responses. But when the prompt was simply “CPR,” the voice assistants misfired. One played news from a public radio station. Another gave information about a movie titled “CPR.” A third gave the address of a local CPR training business.

ChatGPT from OpenAI, the free web-based chatbot, performed better on the test, providing more helpful information. A Microsoft spokesperson said the new Bing Chat, which uses OpenAI’s technology, will first direct users to call 911 and then give basic steps when asked how to perform CPR. Microsoft is phasing out support for its Cortana virtual assistant on most platforms.

Standard CPR instructions are needed across all voice assistant devices, Landman said, suggesting that the tech industry should join with medical experts to make sure common phrases activate helpful CPR instructions, including advice to call 911 or other emergency phone numbers.

A Google spokesperson said the company recognizes the importance of collaborating with the medical community and is “always working to get better.” An Amazon spokesperson declined to comment on Alexa’s performance on the CPR test, and an Apple spokesperson did not provide answers to AP’s questions about how Siri performed.


Tesla Braces for Its First Trial Involving Autopilot Fatality

Tesla Inc (TSLA.O) is set to defend itself for the first time at trial against allegations that failure of its Autopilot driver assistant feature led to death, in what will likely be a major test of Chief Executive Elon Musk’s assertions about the technology.

Self-driving capability is central to Tesla’s financial future, according to Musk, whose reputation as an engineering leader is being challenged by plaintiffs in one of two lawsuits alleging that he personally leads the group behind technology that failed. Wins by Tesla could boost confidence in, and sales of, the software, which costs up to $15,000 per vehicle.

Tesla faces two trials in quick succession, with more to follow.

The first, scheduled for mid-September in a California state court, is a civil lawsuit containing allegations that the Autopilot system caused owner Micah Lee’s Model 3 to suddenly veer off a highway east of Los Angeles at 65 miles per hour, strike a palm tree and burst into flames, all in the span of seconds.

The 2019 crash, which has not been previously reported, killed Lee and seriously injured his two passengers, including a then-8-year-old boy who was disemboweled. The lawsuit, filed against Tesla by the passengers and Lee’s estate, accuses Tesla of knowing that Autopilot and other safety systems were defective when it sold the car.

Musk ‘de facto leader’ of Autopilot team

The second trial, set for early October in a Florida state court, arose out of a 2019 crash north of Miami where owner Stephen Banner’s Model 3 drove under the trailer of an 18-wheeler big rig truck that had pulled into the road, shearing off the Tesla’s roof and killing Banner. Autopilot failed to brake, steer or do anything to avoid the collision, according to the lawsuit filed by Banner’s wife.

Tesla denied liability for both accidents, blamed driver error and said Autopilot is safe when monitored by humans. Tesla said in court documents that drivers must pay attention to the road and keep their hands on the steering wheel.

“There are no self-driving cars on the road today,” the company said.

The civil proceedings will likely reveal new evidence about what Musk and other company officials knew about Autopilot’s capabilities – and any possible deficiencies. Banner’s attorneys, for instance, argue in a pretrial court filing that internal emails show Musk is the Autopilot team’s “de facto leader.”

Tesla and Musk did not respond to Reuters’ emailed questions for this article, but Musk has made no secret of his involvement in self-driving software engineering, often tweeting about his test-driving of a Tesla equipped with “Full Self-Driving” software. He has for years promised that Tesla would achieve self-driving capability only to miss his own targets.

Tesla won a bellwether trial in Los Angeles in April by arguing that it tells drivers its technology requires human monitoring, despite the “Autopilot” and “Full Self-Driving” names. That case involved an accident in which a Model S swerved into the curb and injured its driver; jurors told Reuters after the verdict that they believed Tesla had warned drivers about its system and that driver distraction was to blame.

Stakes higher for Tesla

The stakes for Tesla are much higher in the September and October trials, the first of a series related to Autopilot this year and next, because people died.

“If Tesla backs up a lot of wins in these cases, I think they’re going to get more favorable settlements in other cases,” said Matthew Wansley, a former general counsel of automated driving startup nuTonomy and an associate professor of law at Cardozo School of Law.

On the other hand, “a big loss for Tesla – especially with a big damages award” could “dramatically shape the narrative going forward,” said Bryant Walker Smith, a law professor at the University of South Carolina.

In court filings, the company has argued that Lee consumed alcohol before getting behind the wheel and that it is not clear whether Autopilot was engaged at the time of the crash.

Jonathan Michaels, an attorney for the plaintiffs, declined to comment on Tesla’s specific arguments, but said “we’re fully aware of Tesla’s false claims including their shameful attempts to blame the victims for their known defective autopilot system.”

In the Florida case, Banner’s attorneys also filed a motion arguing punitive damages were warranted. The attorneys have deposed several Tesla executives and received internal documents from the company that they said show Musk and engineers were aware of, and did not fix, shortcomings.

In one deposition, former executive Christopher Moore testified there are limitations to Autopilot, saying it “is not designed to detect every possible hazard or every possible obstacle or vehicle that could be on the road,” according to a transcript reviewed by Reuters.

In 2016, a few months after a fatal accident where a Tesla crashed into a semi-trailer truck, Musk told reporters that the automaker was updating Autopilot with improved radar sensors that likely would have prevented the fatality.

But Adam (Nicklas) Gustafsson, a Tesla Autopilot systems engineer who investigated both accidents in Florida, said that in the almost three years between that 2016 crash and Banner’s accident, no changes were made to Autopilot’s systems to account for cross-traffic, according to court documents submitted by plaintiff lawyers.

The lawyers tried to blame the lack of change on Musk. “Elon Musk has acknowledged problems with the Tesla autopilot system not working properly,” according to plaintiffs’ documents. Former Autopilot engineer Richard Baverstock, who was also deposed, stated that “almost everything” he did at Tesla was done at the request of “Elon,” according to the documents.

Tesla filed an emergency motion in court late on Wednesday seeking to keep deposition transcripts of its employees and other documents secret. Banner’s attorney, Lake “Trey” Lytal III, said he would oppose the motion.

“The great thing about our judicial system is Billion Dollar Corporations can only keep secrets for so long,” he wrote in a text message.


New Crew for Space Station Launches With Astronauts From 4 Countries

Four astronauts from four countries rocketed toward the International Space Station on Saturday.

They should reach the orbiting lab in their SpaceX capsule Sunday, replacing four astronauts who have been living up there since March.

A NASA astronaut was joined on the predawn liftoff from Kennedy Space Center by fliers from Denmark, Japan and Russia. They clasped one another’s gloved hands upon reaching orbit.

It was the first U.S. launch in which every spacecraft seat was occupied by an astronaut from a different country; until now, NASA had always included two or three of its own on its SpaceX taxi flights. A fluke in timing led to the assignments, officials said.

“We’re a united team with a common mission,” NASA’s Jasmin Moghbeli radioed from orbit. Added NASA’s Ken Bowersox, space operations mission chief: “Boy, what a beautiful launch … and with four international crew members, really an exciting thing to see.”

Moghbeli, a Marine pilot serving as commander, is joined on the six-month mission by the European Space Agency’s Andreas Mogensen, Japan’s Satoshi Furukawa and Russia’s Konstantin Borisov.

“To explore space, we need to do it together,” the European Space Agency’s director general, Josef Aschbacher, said minutes before liftoff. “Space is really global, and international cooperation is key.”

The astronauts’ paths to space couldn’t be more different.

Moghbeli’s parents fled Iran during the 1979 revolution. Born in Germany and raised on New York’s Long Island, she joined the Marines and flew attack helicopters in Afghanistan. The first-time space traveler hopes to show Iranian girls that they, too, can aim high. “Belief in yourself is something really powerful,” she said before the flight.

Mogensen worked on oil rigs off the West African coast after getting an engineering degree. He told people puzzled by his job choice that “in the future we would need drillers in space” like Bruce Willis’ character in the killer asteroid film “Armageddon.” He’s convinced the rig experience led to his selection as Denmark’s first astronaut.

Furukawa spent a decade as a surgeon before making Japan’s astronaut cut. Like Mogensen, he has visited the station before.

Borisov, a space rookie, turned to engineering after studying business. He runs a freediving school in Moscow and judges the sport, in which divers shun oxygen tanks and hold their breath underwater.

One of the perks of an international crew, they noted, is the food. Among the delicacies soaring with them: Persian herbed stew, Danish chocolate and Japanese mackerel.

SpaceX’s first-stage booster returned to Cape Canaveral several minutes after liftoff, an extra treat for the thousands of spectators gathered in the early-morning darkness.

Liftoff was delayed a day for additional data reviews of valves in the capsule’s life-support system. The countdown almost was halted again Saturday after a tiny fuel leak cropped up in the capsule’s thruster system. SpaceX engineers managed to verify the leak would pose no threat with barely two minutes remaining on the clock, said Benji Reed, the company’s senior director for human spaceflight.

Another NASA astronaut will launch to the station from Kazakhstan in mid-September under a barter agreement, along with two Russians.

SpaceX has now launched eight crews for NASA. Boeing was hired at the same time nearly a decade ago but has yet to fly astronauts. Its crew capsule is grounded until 2024 by parachute and other issues.


Thailand Threatens Facebook Shutdown Over Scam Ads

Thailand said this week it is preparing to sue Facebook, a move that could see the platform shut down nationwide, over scammers allegedly exploiting the social networking site to cheat local users out of tens of millions of dollars a year.

The country’s minister of digital economy and society, Chaiwut Thanakamanusorn, announced the planned lawsuit after a ministry meeting on Monday.

Ministry spokesperson Wetang Phuangsup told VOA on Thursday the case would be filed in one to two weeks, possibly by the end of the month.

“We are in the stage of gathering information, gathering evidence, and we will file to the court to issue the final judgment on how to deal with Facebook since they are a part of the scamming,” he said.

Some of the most common scams, Wetang said, involve paid advertisements on the site urging people to invest in fake companies, often using the logo of Thailand’s Securities and Exchange Commission or sham endorsements from local celebrities to lure them in.

Of the roughly 16,000 online scamming complaints filed in Thailand last year, he said, 70% to 80% involved Facebook and cost users upwards of $100 million.

“We believe that Facebook has a responsibility,” Wetang said. “Facebook is taking money from advertising a lot, and basically even taking money from Thai society as a whole. Facebook should be more responsible to society, should screen the advertising. … We believe that by doing so it would definitely decrease the investment scam in Thailand on the Facebook.”

Wetang said the ministry had been urging the company to do more to screen and vet paid ads for the past year and was now turning to the courts to possibly shut the site down as a last resort.

“If you are supporting the crime, especially on the internet, you will be liable [for] the crime, and by the law, it’s possible the court can issue the shutdown of Facebook,” he said. “By law, we can ask the court to suspend or punish all the people who support the crime, of course with evidence.”

Neither Facebook nor its parent company, Meta, replied to VOA’s repeated requests for comment or interviews.

The Asia Internet Coalition, an industry association that counts Meta among its members, acknowledged that online scamming was a growing problem across the region. Other members include Google, Amazon, Apple and X, formerly known as Twitter.

“While it’s getting challenging from the scale perspective, it’s also getting complicated and sophisticated because of the technology that has been used when it comes to application on the platforms but also how this technology can be misused,” the coalition’s secretariat, Sarthak Luthra, told VOA.

Luthra would not speak for Meta or address Thailand’s specific complaints against Facebook but said tech companies were taking steps to thwart scammers, including teaching users how to spot them.

Last year, for example, Meta launched a #StayingSafeOnline campaign in Thailand “to raise awareness about some of the most common kinds of online scams, including helping people understand the different kinds of scamsters, their tricks, and tips to stay safe online,” according to the company’s website.

Luthra said tech companies have been facing a growing number of criminal and civil penalties for their content across the region while urging governments to give them more room to regulate themselves and to apply “safe harbor” rules that shield the companies from legal liability for content created by users.

Shutting down any platform on a nationwide scale is not the answer, he said, and he warned of the unintended consequences.

“It really, first, impacts the ease of doing business and also the perception around the digital economy development of a country, so shutting down a platform is of course not a solution to a challenge in this case,” Luthra said.

“A government really needs to think of how do we promote online safety while maintaining an open internet environment,” he said. “From the economic perspective, it does impact investment sentiment, business sentiment and the ability to operate in that particular country.”

At a recent company event in Thailand, Meta said there were some 65 million Facebook users in the country, which also has the second-largest economy in Southeast Asia.

Shutting down the platform would have a “huge” impact on the vast majority of people using the site to make money legally and honestly, said Sutawan Chanprasert, executive director of DigitalReach, a digital rights group based in Thailand.

She said a shutdown would cut off a vital channel for free speech in Thailand and an important tool for independent local media outlets.

“Some of them rely predominantly on Facebook because it’s the most popular social media platform in Thailand, so they publish their content on Facebook in order to reach out to audiences because they don’t have a means to set up … a full-fledged media channel,” she said.

Taking all that away to foil scammers would be “too extreme,” Sutawan said, suggesting the government focus instead on strengthening the country’s cybercrime and security laws and enforcing them.

Ministry spokesperson Wetang said the government was aware of the collateral damage a shutdown could cause but felt compelled to pursue the lawsuit that could bring one about.

“Definitely we are really concerned about the people on Facebook,” he said. “But since this is a crime that already happened, the evidence is so clear … it is impossible that we don’t take action.”

Meta Faces Backlash Over Canada News Block as Wildfires Rage

Meta is being accused of endangering lives by blocking news links in Canada at a crucial moment, when thousands have fled their homes and are desperate for wildfire updates that once would have been shared widely on Facebook.

The situation “is dangerous,” said Kelsey Worth, 35, one of nearly 20,000 residents of Yellowknife and thousands more in small towns ordered to evacuate the Northwest Territories as wildfires advanced.

She described to AFP how “insanely difficult” it has been for herself and other evacuees to find verifiable information about the fires blazing across the near-Arctic territory and other parts of Canada.

“Nobody’s able to know what’s true or not,” she said.

“And when you’re in an emergency situation, time is of the essence,” she said, explaining that many Canadians until now have relied on social media for news.

Meta on Aug. 1 started blocking the distribution of news links and articles on its Facebook and Instagram platforms in response to a recent law requiring digital giants to pay publishers for news content.

The company has been in a virtual showdown with Ottawa over the bill, which passed in June but does not take effect until next year.

Building on similar legislation introduced in Australia, the bill aims to support a struggling Canadian news sector that has seen a flight of advertising dollars and hundreds of publications closed in the last decade.

It requires companies like Meta and Google to make fair commercial deals with Canadian outlets for the news and information — estimated in a report to parliament to be worth US$250 million per year — that is shared on their platforms or face binding arbitration.

But Meta has said the bill is flawed and insisted that news outlets share content on its Facebook and Instagram platforms to attract readers, benefiting them and not the Silicon Valley firm.

Profits over safety

Canadian Prime Minister Justin Trudeau this week assailed Meta, telling reporters it was “inconceivable that a company like Facebook is choosing to put corporate profits ahead of (safety)… and keeping Canadians informed about things like wildfires.”

Almost 80% of all online advertising revenues in Canada go to Meta and Google, which has expressed its own reservations about the new law.

Ollie Williams, director of Cabin Radio in the far north, called Meta’s move to block news sharing “stupid and dangerous.”

He suggested in an interview with AFP that “Meta could lift the ban temporarily in the interests of preservation of life and suffer no financial penalty because the legislation has not taken effect yet.”

Nicolas Servel of Radio Taiga, a French-language station in Yellowknife, noted that some had found ways of circumventing Meta’s block.

They “found other ways to share” information, he said, such as taking screen shots of news articles and sharing them from personal — rather than corporate — social media accounts.

‘Life and death’

Several large newspapers in Canada such as The Globe and Mail and the Toronto Star have launched campaigns to try to attract readers directly to their sites.

But for many smaller news outlets, workarounds have proven challenging as social media platforms have become entrenched.

Public broadcaster CBC in a letter this week pressed Meta to reverse course.

“Time is of the essence,” wrote CBC president Catherine Tait. “I urge you to consider taking the much-needed humanitarian action and immediately lift your ban on vital Canadian news and information to communities dealing with this wildfire emergency.”

As more than 1,000 wildfires burn across Canada, she said, “The need for reliable, trusted, and up-to-date information can literally be the difference between life and death.”

Meta — which did not respond to AFP requests for comment — rejected CBC’s suggestion. Instead, it urged Canadians to use the “Safety Check” function on Facebook to let others know if they are safe or not.

Patrick White, a professor at the University of Quebec in Montreal, said Meta has shown itself to be a “bad corporate citizen.”

“It’s a matter of public safety,” he said, adding that he remains optimistic Ottawa will eventually reach a deal with Meta and other digital giants that addresses their concerns.

Q&A: How Do Europe’s Sweeping Rules for Tech Giants Work?

Google, Facebook, TikTok and other Big Tech companies operating in Europe must comply with one of the most far-reaching efforts to clean up what people see online.

The European Union’s groundbreaking new digital rules took effect Friday for the biggest platforms. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc, long a global leader in cracking down on tech giants.

The DSA is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, already have made changes.

Here’s a look at what has changed:

Which platforms are affected? 

So far, 19. They include eight social media platforms: Facebook; TikTok; X, formerly known as Twitter; YouTube; Instagram; LinkedIn; Pinterest; and Snapchat.

There are five online marketplaces: Amazon, Booking.com, China’s Alibaba and AliExpress, and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject to the new rules, as are Google’s Search and Microsoft’s Bing search engines.

Google Maps and Wikipedia round out the list. 

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — face the DSA’s highest level of regulation. 
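The threshold arithmetic can be sanity-checked in a few lines. This is an illustrative sketch only: the EU population figure below is an approximation (roughly 447 million in 2023), not a number taken from the regulation.

```python
# DSA tiering: platforms with 45 million or more monthly users in the EU --
# about 10% of the bloc's population -- face the strictest obligations.
EU_POPULATION = 447_000_000   # approximate EU population, 2023
DSA_THRESHOLD = 45_000_000    # user count triggering the top regulatory tier

share = DSA_THRESHOLD / EU_POPULATION
print(f"Threshold as share of EU population: {share:.1%}")
```

Run as written, the share comes out at just over 10%, matching the rule of thumb cited by officials.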

Brussels insiders, however, have pointed to some notable omissions, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and it’s possible other platforms may be added later. 

Any business providing digital services to Europeans will eventually have to comply with the DSA. They will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

What’s changing?

Platforms have rolled out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly. 

The DSA “will have a significant impact on the experiences Europeans have when they open their phones or fire up their laptops,” Nick Clegg, Meta’s president for global affairs, said in a blog post. 

Facebook’s and Instagram’s existing tools to report content will be easier to access. Amazon opened a new channel for reporting suspect goods. 

TikTok gave users an extra option for flagging videos, such as for hate speech and harassment, or frauds and scams, which will be reviewed by an additional team of experts, according to the app from Chinese parent company ByteDance. 

Google is offering more “visibility” into content moderation decisions and different ways for users to contact the company. It didn’t offer specifics. Under the DSA, Google and other platforms have to provide more information behind why posts are taken down. 

Facebook, Instagram, TikTok and Snapchat also are giving people the option to turn off automated systems that recommend videos and posts based on their profiles. Such systems have been blamed for leading social media users to increasingly extreme posts. 

The DSA also prohibits targeting vulnerable categories of people, including children, with ads. Platforms like Snapchat and TikTok will stop allowing teen users to be targeted by ads based on their online activities. 

Google will provide more information about targeted ads shown to people in the EU and give researchers more access to data on how its products work. 

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing it’s being treated unfairly. 

Nevertheless, Zalando is launching content-flagging systems for its website, even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes. 

Amazon has filed a similar case with a top EU court.

What if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. 
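The scale of the penalty cap is easy to make concrete. The 6% figure comes from the warning above; the revenue number in this sketch is a hypothetical placeholder, not any company’s reported figure.

```python
def max_dsa_fine(global_revenue_usd: float, cap: float = 0.06) -> float:
    """Upper bound on a DSA fine: up to 6% of a company's global annual revenue."""
    return global_revenue_usd * cap

# A platform with $100 billion in global annual revenue could face a fine
# of up to $6 billion -- the "billions" officials have warned about.
print(max_dsa_fine(100e9))
```

For the largest platforms on the EU’s list, whose annual revenues run well past $100 billion, the cap indeed translates into multibillion-dollar exposure.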

“The real test begins now,” said European Commissioner Thierry Breton, who oversees digital policy. He vowed to “thoroughly enforce the DSA and fully use our new powers to investigate and sanction platforms where warranted.” 

But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech. 

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work. 

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia. 

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels think tank. 

Big platforms have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These assessments are due by the end of August and then they will be independently audited. 

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work. 

What about the rest of the world? 

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of use to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe and “will be implemented globally,” said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia. 

Snapchat said its new reporting and appeal process for flagging illegal content or accounts that break its rules will be rolled out first in the EU and then globally in the coming months. 

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

Electric Vehicle ‘Fast Chargers’ Seen as Game Changer

With White House funding to help get more electric cars on the road, some states are creating local rules to get top technologies into their charging stations. Deana Mitchell has the story.

US Sues SpaceX for Discriminating Against Refugees, Asylum-Seekers

The U.S. Justice Department is suing Elon Musk’s SpaceX for refusing to hire refugees and asylum-seekers at the rocket company.

In a lawsuit filed on Thursday, the Justice Department said SpaceX routinely discriminated against these job applicants between 2018 and 2022, in violation of U.S. immigration laws.

The lawsuit says that Musk and other SpaceX officials falsely claimed the company was allowed to hire only U.S. citizens and permanent residents due to export control laws that regulate the transfer of sensitive technology.

“U.S. law requires at least a green card to be hired at SpaceX, as rockets are advanced weapons technology,” Musk wrote in a June 16, 2020, tweet cited in the lawsuit.

In fact, U.S. export control laws impose no such restrictions, according to the Justice Department.

Those laws limit the transfer of sensitive technology to foreign entities, but they do not prevent high-tech companies such as SpaceX from hiring job applicants who have been granted refugee or asylum status in the U.S. (Foreign nationals, however, need a special permit.)

“Under these laws, companies like SpaceX can hire asylees and refugees for the same positions they would hire U.S. citizens and lawful permanent residents,” the Department said in a statement. “And once hired, asylees and refugees can access export-controlled information and materials without additional government approval, just like U.S. citizens and lawful permanent residents.”

The company did not respond to a VOA request for comment on the lawsuit and whether it had changed its hiring policy.

Recruiters discouraged refugees, say investigators

The Justice Department’s civil rights division launched an investigation into SpaceX in 2020 after learning about the company’s alleged discriminatory hiring practices.

The inquiry discovered that SpaceX “failed to fairly consider or hire asylees and refugees because of their citizenship status and imposed what amounted to a ban on their hire regardless of their qualification, in violation of federal law,” Assistant Attorney General Kristen Clarke said in a statement.

“Our investigation also found that SpaceX recruiters and high-level officials took actions that actively discouraged asylees and refugees from seeking work opportunities at the company,” Clarke said.

According to data SpaceX provided to the Justice Department, out of more than 10,000 hires between September 2018 and May 2022, SpaceX hired only one person described as an asylee on his application.

The company hired the applicant about four months after the Justice Department notified it about its investigation, according to the lawsuit.

No refugees were hired during this period.

“Put differently, SpaceX’s own hiring records show that SpaceX repeatedly rejected applicants who identified as asylees or refugees because it believed that they were ineligible to be hired due to” export regulations, the lawsuit says.

On one occasion, a recruiter turned down an asylee “who had more than nine years of relevant engineering experience and had graduated from Georgia Tech University,” the lawsuit says.

Suit seeks penalties, change

SpaceX, based in Hawthorne, California, designs, manufactures and launches advanced rockets and spacecraft.

The Justice Department’s lawsuit asks an administrative judge to order SpaceX to “cease and desist” its alleged hiring practices and seeks civil penalties and policy changes.

AI Firms Under Fire for Allegedly Infringing on Copyrights

New artificial intelligence tools that write human-like prose and create stunning images have taken the world by storm. But these awe-inspiring technologies are not creating something out of nothing; they’re trained on lots and lots of data, some of which come from works under copyright protection.

Now, the writers, artists and others who own the rights to the material used to teach ChatGPT and other generative AI tools want to stop what they see as blatant copyright infringement of mass proportions.

With billions of dollars at stake, U.S. courts will most likely have to sort out who owns what, using the 1976 Copyright Act, the same law that has determined who owns much of the content published on the internet.

U.S. copyright law seeks to strike a balance between protecting the rights of content creators and fostering creativity and innovation. Among other things, the law gives content creators the exclusive right to reproduce their original work and to prepare derivative works.

But it also provides for an exception. Known as “fair use,” it permits the use of copyrighted material without the copyright holder’s permission for content such as criticism, comment, news reporting, teaching and research.

On the one hand, “we want to allow people who have currently invested time, money, creativity to reap the rewards of what they have done,” said Sean O’Connor, a professor of law at George Mason University. “On the other hand, we don’t want to give them such strong rights that we inhibit the next generation of innovation.”

Is AI ‘scraping’ fair use?

The development of generative AI tools is testing the limits of “fair use,” pitting content creators against technology companies, with the outcome of the dispute promising wide-ranging implications for innovation and society at large.

In the 10 months since ChatGPT’s groundbreaking launch, AI companies have faced a rapidly increasing number of lawsuits over content used to train generative AI tools.  The plaintiffs are seeking damages and want the courts to end the alleged infringement.

In January, three visual artists filed a proposed class-action lawsuit against Stability AI Ltd. and two others in San Francisco, alleging that Stability “scraped” more than 5 billion images from the internet to train its popular image generator Stable Diffusion, without the consent of copyright holders.

Stable Diffusion is a “21st-century collage tool” that “remixes the copyrighted works of millions of artists whose work was used as training data,” according to the lawsuit.

In February, stock photo company Getty Images filed its own lawsuit against Stability AI in both the United States and Britain, saying the company copied more than 12 million photos from Getty’s collection without permission or compensation.

In June, two U.S.-based authors sued OpenAI, the creator of ChatGPT, claiming the company’s training data included nearly 300,000 books pulled from illegal “shadow library” websites that offer copyrighted books.

“A large language model’s output is entirely and uniquely reliant on the material in its training dataset,” the lawsuit says.

Last month, American comedian and author Sarah Silverman and two other writers sued OpenAI and Meta, the parent company of Facebook, over the same claims, saying their chatbots were trained on books that had been illegally acquired.

The lawsuit against OpenAI includes what it describes as “very accurate summaries” of the authors’ books generated by ChatGPT, suggesting the company illegally “copied” and then used them to train the chatbot.

The artificial intelligence companies have rejected the allegations and asked the courts to dismiss the lawsuits.

In a court filing in April, Stability AI, research lab Midjourney and online art gallery DeviantArt wrote that visual artists who sue “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.”

For its part, OpenAI has defended its use of copyrighted material as “fair use,” saying it pulled the works from publicly available datasets on the internet.

The cases are slowly making their way through the courts. It is too early to say how judges will decide.

Last month, a federal judge in San Francisco said he was inclined to toss out most of a lawsuit brought by the three artists against Stability AI but indicated that the claim of direct infringement may continue.

“The big question is fair use,” said Robert Brauneis, a law professor and co-director of the Intellectual Property Program at George Washington University. “I would not be surprised if some of the courts came out in different ways, that some of the cases said, ‘Yes, fair use.’ And others said, ‘No.’”

If the courts are split, the question could eventually go to the Supreme Court, Brauneis said.

Assessing copyright claims

Training generative AI tools to create new works raises two legal questions: Is the data use authorized? And is the new work it creates “derivative” or “transformative”?

The answer is not clear-cut, O’Connor said.

“On the one hand, what the supporters of the generative AI models are saying is that they are acting not much differently than we as humans would do,” he said. “When we read books, watch movies, listen to music, and if we are talented, then we use those to train ourselves as models.

“The counterargument is that … it is categorically different from what humans do when they learn how to become creative themselves.”

While artificial intelligence companies claim their use of the data is fair, O’Connor said they still have to prove that the use was authorized.

“I think that’s a very close call, and I think they may lose on that,” he said.

On the other hand, the AI models can probably avoid liability for generating content that “seems sort of the style of a current author” but is not the same.

“That claim is probably not going to succeed,” O’Connor said. “It will be seen as just a different work.”

But Brauneis said content creators have a strong claim: The AI-generated output will likely compete with the original work.

Imagine you’re a magazine editor who wants an illustration to accompany an article about a particular bird, Brauneis suggested. You could do one of two things: Commission an artist or ask a generative AI tool like Stable Diffusion to create it for you. After a few attempts with the latter, you’ll probably get an image that you can use.

“One of the most important questions to ask about in fair use is, ‘Is this use a substitute, or is it competing with the work of art that is being copied?’” Brauneis said. “And the answer here may be yes. And if it is [competing], that really weighs strongly against fair use.”

This is not the first time that technology companies have been sued over their use of copyrighted material.

In 2005, the Authors Guild filed a class-action lawsuit against Google and three university libraries over Google’s digital books project, alleging “massive copyright infringement.”

In 2015, an appeals court ruled that the project, by then renamed Google Books, was protected under the fair use doctrine.

In 2007, Viacom sued both Google and YouTube for allowing users to upload and view copyrighted material owned by Viacom, including complete episodes of TV shows. The case was later settled out of court.

For Brauneis, the current “Wild West era of creating AI models” recalls YouTube’s freewheeling early days.

“They just wanted to get viewers, and they were willing to take a legal risk to do that,” Brauneis said. “That’s not the way YouTube operates now. YouTube has all sorts of precautions to identify copyrighted content that has not been permitted to be placed on YouTube and then to take it down.”

Artificial intelligence companies may make a similar pivot.

They may have justified using copyrighted material to test out their technology. But now that their models are working, they “may be willing to sit down and think about how to license content,” Brauneis said.

How AI Can ‘Resurrect’ People

In 2023, a new way to use AI has come online. Some companies are using the tool to make lifelike avatars of people, even those who have died.  Maxim Moskalkov reports. Camera: Andrey Degtyarev.

India Lands Craft on Moon’s Unexplored South Pole

An Indian spacecraft has landed on the moon, becoming the first craft to touch down on the lunar surface’s south pole, the country’s space agency said.

India’s attempt to land on the moon Wednesday came days after Russia’s Luna-25 lander, also headed for the unexplored south pole, crashed into the moon.  

It was India’s second attempt to reach the south pole — four years ago, India’s lander crashed during its final approach.  

India has become the fourth country to achieve what is called a “soft-landing” on the moon – a feat accomplished by the United States, China and the former Soviet Union.  

However, none of those lunar missions landed at the south pole. 

The south polar region, where the terrain is rough and rugged, has never been explored.  

The current mission, called Chandrayaan-3, blasted into space on July 14.

Kenyan Court Gives Meta and Sacked Moderators 21 Days to Pursue Settlement  

A Kenyan court has given Facebook’s parent company, Meta, and the content moderators who are suing it for unfair dismissal 21 days to resolve their dispute out of court, a court order showed on Wednesday.

The 184 content moderators are suing Meta and two subcontractors after they say they lost their jobs with one of the firms, Sama, for organizing a union.

The plaintiffs say they were then blacklisted from applying for the same roles at the second firm, Luxembourg-based Majorel, after Facebook switched contractors.

“The parties shall pursue an out of court settlement of this petition through mediation,” said the order by the Employment and Labour Relations Court, which was signed by lawyers for the plaintiffs, Meta, Sama and Majorel.

Kenya’s former chief justice, Willy Mutunga, and Hellen Apiyo, the acting commissioner for labor, will serve as mediators, the order said. If the parties fail to resolve the case within 21 days, the case will proceed before the court, it said.

Meta, Sama and Majorel did not immediately respond to requests for comment.

A judge ruled in April that Meta could be sued by the moderators in Kenya, even though it has no official presence in the east African country.

The case could have implications for how Meta works with content moderators globally. The U.S. social media giant works with thousands of moderators around the world, who review graphic content posted on its platform.

Meta has also been sued in Kenya by a former moderator over accusations of poor working conditions at Sama, and by two Ethiopian researchers and a rights institute, which accuse it of letting violent and hateful posts from Ethiopia flourish on Facebook.

Those cases are ongoing.

Meta said in May 2022, in response to the first case, that it required partners to provide industry-leading conditions. On the Ethiopia case, it said in December that hate speech and incitement to violence were against the rules of Facebook and Instagram.

Meta Rolls Out Web Version of Threads 

Meta Platforms on Tuesday launched the web version of its new text-first social media platform Threads, in a bid to retain professional users and gain an edge over rival X, formerly Twitter.

Threads users will now be able to access the microblogging platform by logging in to its website from their computers, the Facebook and Instagram owner said.

The widely anticipated rollout could help Threads gain broader acceptance among power users such as brands, company accounts, advertisers and journalists, who can now take advantage of the platform by using it on a bigger screen.

Threads, which crossed 100 million sign-ups for the app within five days of its launch on July 5, saw a decline in its popularity as users returned to the more familiar platform X after the initial rush.

In just over a month, daily active users on the Android version of the Threads app dropped to 10.3 million from a peak of 49.3 million, according to a report dated August 10 by analytics platform Similarweb.

The company will be adding more functionality to the web experience in the coming weeks, Meta said.

Europe’s Sweeping Rules for Tech Giants Are About to Kick In

Google, Facebook, TikTok and other Big Tech companies operating in Europe are facing one of the most far-reaching efforts to clean up what people encounter online.

The first phase of the European Union’s groundbreaking new digital rules will take effect this week. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc — long a global leader in cracking down on tech giants.

The DSA, which the biggest platforms must start following Friday, is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, have already started making changes.

Here’s a look at what’s happening this week:

Which platforms are affected?

So far, 19. They include eight social media platforms: Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat.

There are five online marketplaces: Amazon, Booking.com, China’s Alibaba AliExpress and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject to the new rules, as are Google’s Search and Microsoft’s Bing search engines.

Google Maps and Wikipedia round out the list.

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — will face the DSA’s highest level of regulation.

Brussels insiders, however, have pointed to some notable omissions from the EU’s list, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and it’s possible other platforms may be added later on.

Any business providing digital services to Europeans will eventually have to comply with the DSA. They will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

Citing uncertainty over the new rules, Meta Platforms has held off launching its Twitter rival, Threads, in the EU.

What’s changing?

Platforms have started rolling out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly and objectively.

Amazon opened a new channel for reporting suspected illegal products and is providing more information about third-party merchants.

TikTok gave users an “additional reporting option” for content, including advertising, that they believe is illegal. Categories such as hate speech and harassment, suicide and self-harm, misinformation or frauds and scams, will help them pinpoint the problem.

Then, a “new dedicated team of moderators and legal specialists” will determine whether flagged content either violates its policies or is unlawful and should be taken down, according to the app from Chinese parent company ByteDance.

TikTok says the reason for a takedown will be explained to the person who posted the material and the one who flagged it, and decisions can be appealed.

TikTok users can turn off systems that recommend videos based on what a user has previously viewed. Such systems have been blamed for leading social media users to increasingly extreme posts. If personalized recommendations are turned off, TikTok’s feeds will instead suggest videos to European users based on what’s popular in their area and around the world.

The DSA prohibits targeting vulnerable categories of people, including children, with ads.

Snapchat said advertisers won’t be able to use personalization and optimization tools for teens in the EU and U.K. Snapchat users who are 18 and older also would get more transparency and control over ads they see, including “details and insight” on why they’re shown specific ads.

TikTok made similar changes, stopping users 13 to 17 from getting personalized ads “based on their activities on or off TikTok.”

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing that it’s being treated unfairly.

Nevertheless, Zalando is launching content flagging systems for its website even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes.

The company has supported the DSA, said Aurelie Caulier, Zalando’s head of public affairs for the EU.

“It will bring loads of positive changes” for consumers, she said. But “generally, Zalando doesn’t have systemic risk [that other platforms pose]. So that’s why we don’t think we fit in that category.”

Amazon has filed a similar case with a top EU court.

What happens if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech.

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work.

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia.

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels-based think tank.

Under the rules, the biggest platforms will have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These risk assessments are due by the end of August and then they will be independently audited.

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work.

What about the rest of the world?

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of service to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe, said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia.

“The rules and processes that govern Wikimedia projects worldwide, including any changes in response to the DSA, are as universal as possible. This means that changes to our Terms of Use and Office Actions Policy will be implemented globally,” it said in a statement.

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

The regulations are “dealing with multichannel networks that operate globally. So there is going to be a ripple effect once you have kind of mitigations that get taken into place,” she said.


Meta to Soon Launch Web Version of Threads in Race with X for Users

Meta Platforms is set to roll out the web version of its new text-first social media platform Threads, hoping to gain an edge over X, formerly Twitter, after the initial surge in users waned.

The widely anticipated web version will make Threads more useful for power users like brands, company accounts, advertisers and journalists.

Meta did not give a date for the launch, but Instagram head Adam Mosseri said it could happen soon.

“We are close on web…,” Mosseri said in a post on Threads on Friday. The launch could happen as early as this week, according to a report in the Wall Street Journal.

Threads, which launched as an Android and iOS app on July 5 and gained 100 million users in just five days, saw its popularity drop as users returned to the more familiar platform X after the initial rush to try Meta’s new offering. 

In just over a month, its daily active users on the Android app dropped to 10.3 million from a peak of 49.3 million, according to a report by analytics platform Similarweb dated Aug. 10.

Meanwhile, Meta is moving quickly to launch new features. Threads now offers the ability to set post notifications for accounts and to view posts in a chronological feed.

It will soon roll out an improved search that could allow users to search for specific posts and not just accounts. 
