Economy

Australia Activates First Renewable Power Station on Decommissioned Coal Plant Site

The first large-scale battery to be built at an Australian coal site has been switched on in Victoria’s Latrobe Valley, east of Melbourne.

The 150-megawatt battery is at the site of the former Hazelwood power station, which was built in the 1960s and closed in 2017.

The new battery was officially opened Wednesday and can power about 75,000 homes for an hour during the evening peak. The decommissioned coal plant produced 10 times more electricity, but the battery’s operators aim to increase its storage capacity over time.
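The stated figures can be sanity-checked with back-of-the-envelope arithmetic: a 150-megawatt battery supplying 75,000 homes for one hour implies an average household draw of about 2 kilowatts, a plausible evening-peak figure. This is an illustrative check, not a calculation published by the operators.

```python
# Back-of-the-envelope check of the Hazelwood battery figures (illustrative only).
power_mw = 150   # rated discharge power of the battery
homes = 75_000   # homes it can supply during the evening peak
hours = 1.0      # stated duration of supply

# Implied average demand per home, in kilowatts.
kw_per_home = power_mw * 1_000 / homes

# Energy delivered over the one-hour peak, in megawatt-hours.
energy_mwh = power_mw * hours

print(kw_per_home)  # 2.0
print(energy_mwh)   # 150.0
```

At roughly 2 kW per home, the numbers are internally consistent with typical evening household demand.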

The Latrobe Valley has been the center of Victoria’s coal-fired power industry for decades, but the region is changing.

The new battery will store power generated by offshore wind farms and is run by the French energy giant Engie and its partners, Eku Energy and Fluence.

Engie chief executive Rik De Buyserie told reporters it is an important part of Australia’s green energy future.

“The commissioning of this battery represents a key milestone in this journey and marks an important step in the transition of the Latrobe Valley from a thermal energy provider to a clean energy provider,” he said.

The state of Victoria aims to have at least 2.6 gigawatts of battery storage connected to the electricity grid by 2030 and 6.3 gigawatts by 2035.

Lily D’Ambrosio, Victoria’s minister for climate action, energy and resources, told reporters that the state government is committed to boosting its renewable energy sector.

“It is important that we just do not sit around waiting for old technology to disappear, close down, but we actually get in front of it and make sure that we have more than sufficient supply to meet our needs,” she said. “That is what keeps downward pressure on prices.”

Australia has legislated a target to cut carbon emissions by 43% from 2005 levels by 2030 and to achieve net zero emissions by 2050.

Electricity generation in Australia is still dominated by coal and gas but there is a distinct shift to renewable sources of power.

In April the Clean Energy Council, an industry association, said that clean energy accounted for 35.9% of Australia’s total electricity generation in 2022, up from 32.5% in 2021.

US Energy Dept., Other Agencies Hacked

U.S. security officials say the U.S. Energy Department and several other federal agencies have been hacked by a Russian cyber-extortion gang.

Homeland Security officials said Thursday the agencies were caught up in the hacking of MOVEit Transfer, a file-transfer program that is popular with governments and corporations.

The Energy Department said two of its entities were “compromised” in the hack.

The Russia-linked extortion group Cl0p, which claimed responsibility for the hacking, said last week on its dark web site that its victims had until Wednesday to negotiate a ransom or risk having sensitive information dumped online. It added that it would delete any data stolen from governments, cities and police departments.

Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, said while the intrusion was “largely an opportunistic one” that was superficial and caught quickly, her agency was “very concerned about this campaign and working on it with urgency.”

Reuters reports that Britain’s Shell, the University of Georgia, Johns Hopkins University and the Johns Hopkins Health System were also among those targeted in the hacking campaign. The Associated Press quoted a senior CISA official as saying U.S. military and intelligence agencies were not affected.

Progress Software, the maker of MOVEit, said it is working with the federal agencies and its other customers to help fix their systems.

Information for this report was provided by The Associated Press and Reuters.  

Experts Divided as YouTube Reverses Policy on Election Misinformation

An announcement by YouTube that it will no longer remove content containing misinformation on the U.S. 2020 presidential election has some experts divided.

In a June blog post, YouTube said it was ending its policy — enforced since December 2020 — that removed tens of thousands of videos that falsely claimed the 2020 election was impaired by “widespread fraud, errors or glitches.”

“We find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm,” the post said.

The Google-owned platform says the move is to support free speech, but some experts in tech and disinformation say it could allow harmful content to again be easily shared.

“The message that YouTube is sending is that the election denial crowd is now welcome again on YouTube and can resume its campaign of undermining trust in American elections and democratic institutions,” said Paul Barrett, deputy director at New York University’s Stern Center for Business and Human Rights.

But others say the policy caused “legitimate” content to be removed and that the core issue is a wider societal problem, not something confined to YouTube.

YouTube’s other election misinformation policies remain unchanged, the platform said.

These include prohibiting content aimed at misleading people about the time and place for voting and claims that could significantly discourage voting.

Google spokesperson Ivy Choi told VOA in an email that the company has “nothing to add beyond what we shared in our blog post.”

Still, some U.S. lawmakers and experts are concerned about how harmful content circulates on YouTube.

Representative Zoe Lofgren, who sat on the House January 6 committee, said the idea that election denial disinformation is “no longer harmful — including that they do not increase the risk of violence — is simply wrong.”

“The lies continue to have a dramatic impact on our democracy and on the drastic increase in threats faced by elected officials at all levels of government,” Lofgren told VOA in an emailed statement.

Lofgren, a Democrat from California, added that YouTube’s parent company Alphabet should reconsider its decision.

Justin Hendrix, founder and editor of the nonprofit website Tech Policy Press, questioned whether YouTube’s policy had even been successful.

“There is, to me, a bigger question about whether YouTube was ever really effectively removing information that promoted false claims about the 2020 election,” Hendrix told VOA. “I wonder whether this is a capitulation to the reality that the company was never able to effectively take action against false claims in the 2020 election.”

YouTube is one of the most popular social media platforms in the United States, and it has over 2 billion users around the world.

But despite the platform’s popularity, it has escaped the level of scrutiny given to Twitter and Facebook, according to Barrett. The main reason: the difficulty in analyzing videos in bulk.

YouTube is the main place people go for videos on innocuous things like how to fix your car or do your makeup, said Barrett. “But it’s also the go-to place for video for people with extreme political ideas,” he added.

Videos on YouTube amplified the false narratives that the 2020 election was rigged and that the entire American election system is corrupt, according to a 2022 report Barrett and Hendrix co-authored, A Platform ‘Weaponized’: How YouTube Spreads Harmful Content – And What Can Be Done About It.

Election misinformation was also cited by the January 6 committee as it investigated the circumstances that resulted in a mob of former President Donald Trump’s supporters storming the U.S. Capitol on the day the election results were due to be certified.

In a report on the insurrection, the committee said the platform “included efforts to boost authoritative content” and that it “labeled election fraud claims — but did so anemically.”

Some free-speech experts like Jennifer Stisa Granick, the surveillance and cybersecurity counsel at the American Civil Liberties Union, believe the policy change is good.

“There have been some legitimate discussions about voting and the legitimacy of the election that have been adversely impacted” under the former policy, Granick said.

“Election disinformation was not spread by YouTube or other online platforms, but by [Trump] himself. And the misinformation that circulates online is a drop in the bucket compared to what the [former] president of the United States says,” Granick said.

The bigger problem, she said, is that for some political candidates, “election denial is a fundamental part of their campaigns.”

People who complain that YouTube is evading its responsibility are “looking to the platform to solve a social and political problem that the United States has,” Granick said.

Roy Gutterman, director of the Tully Center for Free Speech at Syracuse University, believes any policy that openly fosters free speech is worthwhile.

“But calls to violence, which may accompany some of this discourse, would still not be protected,” Gutterman told VOA.

Barrett, however, is concerned that the reversal creates the potential for YouTube to be exploited.

The broader effect, Barrett said, “is the erosion of trust more generally” — not just in American elections.

Studies have shown that exposure to misinformation and disinformation is tied to lower trust in the media.

The YouTube policy change is hardly the main cause of that process, Barrett said, but it’s a contributing factor.

The policy change comes as several major social media companies face criticism for failing to quell election misinformation and disinformation on their platforms. The recent development with YouTube is part of a broader trend in the tech industry, according to Hendrix.

“I’m concerned that we’re seeing across the board almost a kind of throwing up the hands around some of these issues,” he said, pointing to staff layoffs, including those in trust and safety departments.

All of these factors contribute to “an erosion of even more than democracy,” Barrett said. “That’s an erosion of the social connections that hold society together.”

Security Firm: Suspected Chinese Hackers Breached Hundreds of Networks Globally

Suspected state-backed Chinese hackers used a security hole in a popular email security appliance to break into the networks of hundreds of public and private sector organizations globally, nearly a third of them government agencies including foreign ministries, the U.S. cybersecurity firm Mandiant said Thursday.

“This is the broadest cyber espionage campaign known to be conducted by a China-nexus threat actor since the mass exploitation of Microsoft Exchange in early 2021,” Charles Carmakal, Mandiant’s chief technical officer, said in an emailed statement. That hack compromised tens of thousands of computers globally.

In a blog post Thursday, Google-owned Mandiant expressed “high confidence” that the group exploiting a software vulnerability in Barracuda Networks’ Email Security Gateway was engaged in “espionage activity in support of the People’s Republic of China.” It said the activity began as early as October.

The hackers sent emails containing malicious file attachments to gain access to targeted organizations’ devices and data, Mandiant said. Of those organizations, 55% were from the Americas, 22% from the Asia Pacific region and 24% from Europe, the Middle East and Africa. They included foreign ministries in Southeast Asia and foreign trade offices and academic organizations in Taiwan and Hong Kong, the company said.

Mandiant said the concentration of victims in the Americas may partially reflect the geography of Barracuda’s customer base.

Barracuda announced on June 6 that some of its email security appliances had been hacked as early as October, giving the intruders a back door into compromised networks. The hack was so severe that the California company recommended fully replacing the appliances.

After discovering the breach in mid-May, Barracuda released containment and remediation patches, but the hacking group, which Mandiant identifies as UNC4841, altered its malware to try to maintain access, Mandiant said. The group then “countered with high-frequency operations targeting a number of victims located in at least 16 different countries.”

Blinken trip

Word of the breach comes as U.S. Secretary of State Antony Blinken departs for China this weekend as part of the Biden administration’s push to repair deteriorating ties between Washington and Beijing.

His visit had initially been planned for early this year but was postponed indefinitely after the discovery and shootdown of what the U.S. said was a Chinese spy balloon over the United States.

Mandiant said the targeting at both the organizational and individual account levels focused on issues that are high policy priorities for China, particularly in the Asia Pacific region. It said the hackers searched for email accounts of people working for governments of political or strategic interest to China at the time they were participating in diplomatic meetings with other countries.

In an emailed statement Thursday, Barracuda said about 5% of its active Email Security Gateway appliances worldwide showed evidence of potential compromise. It said it was providing replacement appliances to affected customers at no cost.

The U.S. government has accused Beijing of being its principal cyber espionage threat, with state-backed Chinese hackers stealing data from both the private and public sector.

In terms of raw intelligence affecting the U.S., China’s largest electronic infiltrations have targeted the Office of Personnel Management, the health insurer Anthem, the credit bureau Equifax and the hotel chain Marriott.

Earlier this year, Microsoft said state-backed Chinese hackers have been targeting U.S. critical infrastructure and could be laying the technical groundwork for the potential disruption of critical communications between the U.S. and Asia during future crises.

China says the U.S. also engages in cyber espionage against it, hacking into computers of its universities and companies.

Chinese EV Makers Make Progress in Bid to Dominate British Market

Chinese manufacturers of electric vehicles are stepping up their push to dominate the European market. As Amy Guttman reports from London, they are making progress in Britain, where car shoppers are eager to buy the lower-cost electric cars that Chinese automakers are offering.

Bill Gates Visits China for Health, Development Talks

Microsoft co-founder Bill Gates was in China on Thursday for what he said were meetings with global health and development partners who have worked with his charitable foundation.

“Solving problems like climate change, health inequity and food insecurity requires innovation,” Gates tweeted. “From developing malaria drugs to investing in climate adaptation, China has a lot of experience in that. We need to unlock that kind of progress for more people around the world.”

Gates said global crises have stifled progress in reducing child deaths and poverty, and that he will next travel to West Africa because African countries are particularly vulnerable “with high food prices, crushing debt, and increasing rates of TB and malaria.”

Reuters, citing two people familiar with the matter, said Gates would meet with Chinese President Xi Jinping.

Gates is the latest business figure to visit China this year, following Apple’s Tim Cook and Tesla’s Elon Musk.

Some information for this report came from The Associated Press, Agence France-Presse and Reuters.

Cambodian Facial Recognition Effort Raises Fears of Misuse

Experts are raising concerns that a recent Cambodian government order allocating around $1 million to a local company for a facial recognition technology project could pave the way for the technology to be used against citizens and human rights defenders.

The order, signed by Prime Minister Hun Sen and released in March in a recent tranche of government documents, would award the funds to HSC Co. Ltd., a Cambodian company led by tycoon Sok Hong that has previously printed Cambodian passports and installed CCTV cameras in Phnom Penh, Cambodia’s capital.

The Oct. 17 order appears to be the first direct indication of Cambodia’s interest in pursuing facial recognition, alarming experts who say such initiatives could eventually be used to target dissenters and build a stronger surveillance state similar to China’s. In recent months, the government has blocked the country’s main opposition party from participating in the July national elections, shut down independent media and jailed critics such as labor organizers and opposition politicians.

Neither the Interior Ministry nor the company would answer questions about what the project entails.

“This is national security and not everyone knows about how it works,” Khieu Sopheak, secretary of state and spokesperson for the Interior Ministry, told VOA by phone. “Even in the U.S., if you ask about the air defense system, they will tell you the same. This is the national security system, which we can’t tell everyone [about].”

The order names HSC, a company Sok Hong founded in 2007, as the funds’ recipient. HSC’s businesses span food and beverage, dredging and retail.

HSC also has close ties to the government: in addition to printing passports and providing CCTV cameras in Phnom Penh, it runs the system for national ID cards and has provided border checkpoint technology. Malaysian and Cambodian media identify Sok Hong as the son of Sok Kong, another tycoon who founded the conglomerate Sokimex Investment Group. Both father and son are oknhas or “lords,” a Cambodian honorific given to those who have donated more than $500,000 to the government.

When reached by phone, Sok Hong told VOA, “I think it shouldn’t be reported since it is related to national security.”

Cambodia’s history of repression, including monitoring dissidents in person and online, has raised suspicions that it could deploy such technology to target activists. Last year, labor leaders reported they were recorded via drones during protests.

“Authorities can use facial recognition technology to identify, track individuals and gather vast amounts of personal data without their consent, which could eventually lead to massive surveillance,” said Chak Sopheap, director of the Cambodian Center for Human Rights. “For instance, when a government uses facial recognition to monitor attendance at peaceful gatherings, these actions raise severe concerns about the safety of those citizens.”

In addition, giving control of facial recognition technology to a politically connected firm, and one that already has access to a trove of identity-related information, could centralize citizens’ data in a one-stop shop. That could make it easier to fine-tune algorithms quickly and later develop more facial recognition tools to be shared with the government in a mutually beneficial relationship, Joshua Kurlantzick, Council on Foreign Relations senior fellow for Southeast Asia, told VOA.

China — one of Cambodia’s oldest and closest allies — has pioneered collecting vast amounts of data to monitor citizens. In Xinjiang, home to about 12 million Uyghurs, Chinese authorities combine people’s biometric data and digital activities to create a detailed portrait of their lives.

In recent years, China has sought to influence Southeast Asia, “providing an explicit model for surveillance and a model for a closed and walled-garden internet,” Kurlantzick said, referring to methods of blocking or managing users’ access to certain content.

Some efforts have been formalized under the Digital Silk Road, China’s technology-focused subset of the Belt and Road initiative that provides support, infrastructure and subsidized products to recipient countries.

China’s investment in Cambodian monitoring systems dates back to the early days of the Digital Silk Road. In 2015, it installed an estimated $3 million worth of CCTV cameras in Phnom Penh and later promised more cameras to “allow a database to accumulate for the investigation of criminal cases,” according to reports at the time. There is no indication China is involved in the HSC project, however.

While dozens of countries use facial recognition technology for legitimate public safety uses, such investments must be accompanied by strict data protection laws and enforcement, said Gatra Priyandita, a cyber politics analyst at the Australian Strategic Policy Institute.

Cambodia does not have comprehensive data privacy regulations. The prime minister himself has monitored Zoom calls hosted by political foes, posting on Facebook that “Hun Sen’s people are everywhere.”

Given the country’s approach to digital privacy, housing facial recognition within a government-tied conglomerate is “concerning” but not surprising, Priyandita said.

“The long-term goal of these kinds of arrangements is the reinforcement of regime security, of course, particularly the protection of Cambodia’s main political and business families,” Priyandita said.

In the immediate future, Cambodia’s capacity to carry out mass surveillance is uncertain. The National Internet Gateway — a system for routing traffic through government servers which critics compared to China’s “Great Firewall” — was delayed in early 2022. Shortly before the scheduled rollout, the government advertised more than 100 positions related to data centers and artificial intelligence, sowing doubts about the technical knowledge behind the project.

Still, the government is pushing to strengthen its digital capabilities, fast-tracking controversial laws around cybercrime and cybersecurity and pursuing a 15-year plan to develop the digital economy, including a skilled technical workforce.

Sun Narin of VOA’s Khmer Service contributed to this report.

As Deepfake Fraud Permeates China, Authorities Target Political Challenges Posed By AI

Chinese authorities are cracking down on political and fraud cases driven by deepfakes, created with face- and voice-changing software that tricks targets into believing they are video chatting with a loved one or another trusted person.

How good are the deepfakes? Good enough to trick an executive at a Fuzhou tech company in Fujian province who almost lost $600,000 to a person he thought was a friend claiming to need a quick cash infusion.

The entire transaction took less than 10 minutes from the first contact via the phone app WeChat to police stopping the online bank transfer when the target called the authorities after learning his real friend had never requested the loan, according to Sina Technology.

Despite the public’s outcry about such AI-driven fraud, some experts say Beijing appears more concerned about the political challenges that deepfakes may pose, as shown by newly implemented regulations on “deep synthesis” management that outlaw activities that “endanger national security and interests and damage the national image.”

The rapid development of artificial intelligence technology has propelled cutting-edge technology to mass entertainment applications in just a few years.

In a 2017 demonstration of the risks, a video created by University of Washington researchers showed then-U.S. President Barack Obama saying things he hadn’t.

Two years later, Chinese smartphone apps like Zao let users swap their faces with celebrities so they could appear as if they were in a movie. Zao was removed from app stores in 2019 and Avatarify, another popular Chinese face-swapping app, was also banned in 2021, likely for violation of privacy and portrait rights, according to Chinese media.

Pavel Goldman-Kalaydin, head of artificial intelligence and machine learning at SumSub, a Berlin-based global antifraud company, explained how easy it is with a personal computer or smartphone to make a video in which a person appears to say things he or she never would.

“To create a deepfake, a fraudster uses a real person’s document, taking a photo of it and turning it into a 3D persona,” he said. “The problem is that the technology, it is becoming more and more democratized. Many people can use it. … They can create many deepfakes, and they try to bypass these checks that we try to enforce.”

Subbarao Kambhampati, professor at the School of Computing and Augmented Intelligence at Arizona State University, said in a telephone interview he was surprised by the apparent shift from voice cloning to deepfake video calling by scammers in China. He compared that to a rise in voice-cloning phone scams in the U.S.

“Audio alone, you’re more easily fooled, but audio plus video, it would be a little harder to fool you. But apparently they’re able to do it,” Kambhampati said, adding that it is harder to make a video that appears trustworthy.

“Subconsciously we look at people’s faces … and realize that they’re not exactly behaving the way we normally see them behave in terms of their facial expressions.”

Experts say that AI fraud will become more sophisticated.

“We don’t expect the problem to go away. The biggest solution … is education, let people understand the days of trusting your ears and eyes are over, and you need to keep that in the back of your mind,” Kambhampati said.

The Internet Society of China issued a warning in May, calling on the public to be more vigilant as scams and slander driven by AI face-swapping and voice-changing became common.

The Wall Street Journal reported on June 4 that local governments across China have begun to crack down on false information generated by artificial intelligence chatbots. Much of the false content designed as clickbait is similar to authentic material on topics that have already attracted public attention.

To regulate “deep synthesis” content, China’s administrative measures implemented on January 10 require service providers to “conspicuously mark” AI-generated content that “may cause public confusion or misidentification” so that users can tell authentic media content from deepfakes.
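As a toy illustration of that labeling requirement, a text-generation service could "conspicuously mark" its output by prepending a disclosure label. This is a hedged sketch only: the label text and function are hypothetical, and real deep-synthesis marking of images, audio and video typically involves visible overlays or embedded metadata rather than a text prefix.

```python
# Hypothetical disclosure label; the regulation does not prescribe exact wording.
AI_LABEL = "[AI-generated]"

def mark_ai_content(text: str) -> str:
    """Prepend a conspicuous disclosure label to AI-generated text.

    Idempotent: content that already carries the label is returned
    unchanged, so repeated processing does not stack labels.
    """
    if text.startswith(AI_LABEL):
        return text
    return f"{AI_LABEL} {text}"
```

A service applying this at the output boundary would let readers distinguish synthetic text at a glance, which is the stated intent of the "conspicuously mark" rule.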

China’s practice of requiring technology platforms to “watermark” deepfake content has been widely discussed internationally.

Matt Sheehan, a fellow in the Asia Program at the Carnegie Endowment for International Peace, noted that deepfake regulations place the onus on the companies that develop and operate these technologies.

“If enforced well, the regulations could make it harder for criminals to get their hands on these AI tools,” he said in an email to VOA Mandarin. “It could throw up some hurdles to this kind of fraud.”

But he also said that much depends on how Beijing implements the regulations and whether bad actors can obtain AI tools outside China.

“So, it’s not a problem with the technology,” said SumSub’s Goldman-Kalaydin. “It is always a problem with the usage of the technology. So, you can regulate the usage, but not the technology.”

James Lewis, senior vice president of the strategic technologies program at the Center for Strategic and International Studies in Washington, told VOA Mandarin, “Chinese law needs to be modernized for changes in technology, and I know the Chinese are thinking about that. So, the cybercrime laws you have will probably catch things like deepfakes. What will be hard to handle is the volume and the sophistication of the new products, but I know the Chinese government is very worried about fraud and looking for ways to get control of it.”

Others suggest that in regulating AI, political stability is a bigger concern for the Chinese government.

“I think they have a stronger incentive to work on the political threats than they do for fraud,” said Bill Drexel, an associate fellow for the Technology and National Security Program at the Center for a New American Security.

In May, the hashtag #AIFraudEruptingAcrossChina was trending on China’s social media platform Weibo. However, the hashtag has since been censored, according to the Wall Street Journal, suggesting authorities are discouraging discussion on AI-driven fraud.

“So even we can see from this incident, once it appeared that the Chinese public was afraid that there was too much AI-powered fraud, they censored,” Drexel told VOA Mandarin.

He continued, “The fact that official state-run media initially reported these incidents and then later discussion of it was censored just goes to show that they do ultimately care about covering themselves politically more than they care about addressing fraud.”

Adrianna Zhang contributed to this report.

Bill Gates in China to Meet President Xi on Friday – Sources 

Bill Gates, Microsoft Corp’s co-founder, is set to meet Chinese President Xi Jinping on Friday during his visit to China, two people with knowledge of the matter said.

It would be Xi’s first meeting with a foreign private entrepreneur in recent years. The people said the encounter may be one-on-one. A third source confirmed the two would meet, without providing details.

The sources did not say what the two might discuss. Gates tweeted on Wednesday that he had landed in Beijing for the first time since 2019 and that he would meet with partners who had been working on global health and development challenges with the Bill & Melinda Gates Foundation.

The foundation and China’s State Council Information Office, which handles media queries on behalf of the Chinese government, did not immediately respond to Reuters requests for comment. 

Gates stepped down from Microsoft’s board in 2020 to focus on philanthropic works related to global health, education and climate change. He quit his full-time executive role at Microsoft in 2008. 

The last reported meeting between Xi and Gates was in 2015, when they met on the sidelines of the Boao forum in Hainan province. In early 2020, Xi wrote a letter to Gates thanking him, and the Bill & Melinda Gates Foundation, for pledging assistance to China including $5 million for its fight against COVID. 

The meeting would end a long hiatus in Xi’s engagement with foreign private entrepreneurs and business leaders; the Chinese president stopped traveling abroad for nearly three years as China shut its borders during the pandemic.

Several foreign CEOs have visited China since it reopened early this year but most have mainly met with government ministers. 

Premier Li Qiang met a group of CEOs including Apple’s Tim Cook in March and a source told Reuters that Tesla’s Elon Musk met vice-premier Ding Xuexiang last month.

EU Lawmakers Vote for Tougher AI Rules as Draft Moves to Final Stage

EU lawmakers on Wednesday voted for tougher landmark draft artificial intelligence rules that include a ban on the use of the technology in biometric surveillance and a requirement that generative AI systems like ChatGPT disclose AI-generated content.

The lawmakers agreed to the amendments to the draft legislation proposed by the European Commission, which is seeking to set a global standard for the technology used in everything from automated factories to bots and self-driving cars.

Rapid adoption of Microsoft-backed OpenAI’s ChatGPT and other bots has led top AI scientists and company executives, including Elon Musk and OpenAI CEO Sam Altman, to warn of the potential risks posed to society.

“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” said Brando Benifei, co-rapporteur of the draft act.

Among other changes, European Union lawmakers want any company using generative tools to disclose copyrighted material used to train its systems, and companies working on “high-risk” applications to do a fundamental rights impact assessment and evaluate environmental impact.

Microsoft, which has called for AI rules, welcomed the lawmakers’ agreement.

“We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said.

However, the Computer and Communications Industry Association said the amendments on high-risk AIs were likely to overburden European AI developers with “excessively prescriptive rules” and slow down innovation.

“AI raises a lot of questions – socially, ethically, economically. But now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility,” EU industry chief Thierry Breton said.

The Commission announced its draft rules two years ago, aiming to set a global standard for a technology key to almost every industry and business, and to catch up with AI leaders the United States and China.

The lawmakers will now have to thrash out details with European Union countries before the draft rules become legislation. 


EU Regulators Order Google To Break up Digital Ad Business Over Competition Concerns

European Union antitrust regulators took aim at Google’s lucrative digital advertising business in an unprecedented decision ordering the tech giant to sell off some of its ad business to address competition concerns.

The European Commission, the bloc’s executive branch and top antitrust enforcer, said that its preliminary view after an investigation is that “only the mandatory divestment by Google of part of its services” would satisfy the concerns.

The 27-nation EU has led the global movement to crack down on Big Tech companies, but it has previously relied on issuing blockbuster fines, including three antitrust penalties for Google worth billions of dollars.

It’s the first time the bloc has ordered a tech giant to split off a key part of its business.

Google can now defend itself by making its case before the commission issues its final decision. The company didn’t immediately respond to a request for comment.

The commission’s decision stems from a formal investigation that it opened in June 2021, looking into whether Google violated the bloc’s competition rules by favoring its own online display advertising technology services at the expense of rival publishers, advertisers and advertising technology services.

YouTube was one focus of the commission’s investigation, which looked into whether Google was using the video sharing site’s dominant position to favor its own ad-buying services by imposing restrictions on rivals.

Google’s ad tech business is also under investigation by Britain’s antitrust watchdog and faces litigation in the U.S.

Brussels has previously hit Google with more than $8.6 billion worth of fines in three separate antitrust cases, involving its Android mobile operating system and shopping and search advertising services.

The company is appealing all three penalties. An EU court last year slightly reduced the Android penalty to 4.125 billion euros. EU regulators have the power to impose penalties worth up to 10% of a company’s annual revenue.


Big Amazon Cloud Services Recovering After Outage Hits Thousands of Users

Amazon.com said cloud services offered by its unit Amazon Web Services were recovering after a big disruption on Tuesday affected websites of the New York Metropolitan Transportation Authority and The Boston Globe, among others.

Several hours after Downdetector.com started showing reports of outages, Amazon said many AWS services had fully recovered and the issue was marked resolved.

“We are continuing to work to fully recover all services,” AWS’ status page showed.

Tuesday’s impact, stretching from transportation to financial services businesses, underscores both the adoption of Amazon’s younger Lambda service and the degree to which many of its cloud offerings are crucial to companies in the internet age.

According to research in the past year from the cloud company Datadog, more than half of organizations operating in the cloud use Lambda or rival services, known as serverless technology.

Nearly 12,000 users had reported issues with accessing the service, according to Downdetector, which tracks outages by collating status reports from a number of sources, including user-submitted errors on its platform.

The disruption appeared shorter and narrower than the one the company suffered in 2017, when its data-storage service Amazon S3, the bread and butter of its cloud business, went down.

The outage appeared to extend to AWS’s own webpage describing disruptions in its operations, which at one point on Tuesday failed to load, Reuters witnesses said.

“We quickly narrowed down the root cause to be an issue with a subsystem responsible for capacity management for AWS Lambda, which caused errors directly for customers and indirectly through the use by other AWS services,” Amazon said.

AWS Lambda is a service that lets customers run computer programs without having to manage any underlying servers.
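For readers unfamiliar with the serverless model, the idea can be sketched with a minimal, illustrative handler in Python. The function body and payload below are hypothetical, but the (event, context) signature is the one AWS Lambda invokes; the cloud provider runs the function on demand, with no servers for the customer to manage:

```python
# Minimal Lambda-style handler: the platform calls this function
# once per request; there is no server for the customer to provision.
def handler(event, context):
    # 'event' carries the request payload; 'context' holds runtime info.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Locally, the same function can be exercised by calling it directly with a sample event, e.g. `handler({"name": "VOA"}, None)`.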

Twitter users expressed their frustration with the outage, with one user saying, “I don’t know, Alexa won’t tell me because #AWS and her services are down!”

Delta Air Lines also said it was facing problems but did not say if it was related to the AWS outage. The company did not immediately respond to a request for comment.

Other Amazon services such as Amazon Music and Alexa were also impacted, according to Downdetector.

Amazon had its last major outage in December 2021, when disruptions to its cloud services temporarily knocked out streaming platforms Netflix and Disney+, Robinhood, and Amazon’s e-commerce website ahead of Christmas.


India Denies Dorsey’s Claims It Threatened to Shut Down Twitter

India threatened to shut Twitter down unless it complied with orders to restrict accounts critical of the government’s handling of farmer protests, co-founder Jack Dorsey said, an accusation Prime Minister Narendra Modi’s government called an “outright lie.”

Dorsey, who quit as Twitter CEO in 2021, said on Monday that India also threatened the company with raids on employees if it did not comply with government requests to take down certain posts.

“It manifested in ways such as: ‘We will shut Twitter down in India’, which is a very large market for us; ‘we will raid the homes of your employees’, which they did; And this is India, a democratic country,” Dorsey said in an interview with YouTube news show Breaking Points.

Deputy Minister for Information Technology Rajeev Chandrasekhar, a top ranking official in Modi’s government, lashed out against Dorsey in response, calling his assertions an “outright lie.”

“No one went to jail nor was Twitter ‘shut down’. Dorsey’s Twitter regime had a problem accepting the sovereignty of Indian law,” he said in a post on Twitter.

Dorsey’s comments again put the spotlight on the struggles faced by foreign technology giants operating under Modi’s rule. His government has often criticized Google, Facebook and Twitter for not doing enough to tackle fake or “anti-India” content on their platforms, or for not complying with rules.

The former Twitter CEO’s comments drew widespread attention as it is unusual for global companies operating in India to publicly criticize the government. Last year, Xiaomi in a court filing said India’s financial crime agency threatened its executives with “physical violence” and coercion, an allegation which the agency denied.

Dorsey also mentioned similar pressure from governments in Turkey and Nigeria, which had restricted the platform in their nations at different points over the years before lifting those bans.

Twitter was bought by Elon Musk in a $44 billion deal last year.

Chandrasekhar said Twitter under Dorsey and his team had repeatedly violated Indian law. He didn’t name Musk, but added Twitter had been in compliance since June 2022.

Big tech vs Modi

Modi and his ministers are prolific users of Twitter, but free speech activists say his administration resorts to excessive censorship of content it deems critical of the government. India maintains its content removal orders are aimed at protecting users and the sovereignty of the state.

The public spat with Twitter during 2021 saw Modi’s government seeking an “emergency blocking” of the “provocative” Twitter hashtag “#ModiPlanningFarmerGenocide” and dozens of accounts. Farmers’ groups had been protesting against new agriculture laws at the time, one of the biggest challenges faced by the Modi government.

The government later gave in to the farmers’ demands. Twitter initially complied with the government requests but later restored most of the accounts, citing “insufficient justification”, leading to officials threatening legal consequences.

In subsequent weeks, police visited a Twitter office as part of another probe linked to tagging of some ruling party posts as manipulated. Twitter at the time said it was worried about staff safety.

Dorsey said in his interview that many content takedown requests in India during the farmer protests were “around particular journalists that were critical of the government.”

Since Modi took office in 2014, India has slid from 140th in the World Press Freedom Index to 161st this year, out of 180 countries, its lowest ranking ever.


Startup Firm Leads Kenya Into World of High-Tech Manufacturing

A three-year-old startup company is leading Kenya into the world of high-tech manufacturing, building a workforce capable of making semiconductors and nanotechnology products that operate modern devices from mobile phones to refrigerators. 

Anthony Githinji is the founder of Semiconductors Technologies Limited, or STL, located in Nyeri, about a three-hour drive from Nairobi. 

He brought his know-how to Kenya from the United States, where he started work in 1997 on semiconductors — materials that conduct electricity and are used in thousands of products. 

He said the biggest barrier to entry in any high-tech business is finding a workforce with the right skills. In deciding to start a business in Kenya, his country of origin, Githinji said a meeting with the vice-chancellor of Dedan Kimathi University of Science and Technology, also known as DEKUT, was a game changer. 

“DEKUT and STL formed a partnership that allowed for us to engage STEM-related education and develop it, tool it and orient it toward our specific industry, which is the semiconductor and microchip space and so we started attaching students and having internships through STL, and it became very clear and very quickly that the level and caliber of the education system and the product of DEKUT, I believe most institutions of higher learning in Kenya are very high level,” Githinji said.

Female engineers

STL employs about 100 engineers, 70 percent of them women.

Irene Ngetich, a process engineer with a background in telecommunications and electrical engineering, graduated from DEKUT in 2019. She said she entered the STEM (Science, Technology, Engineering, and Mathematics) sector after reading an article recommended by her father about another woman in the field. 

“So, when I read through [it] … she mentioned that in her class there were only two ladies. Of course, I love doing challenging things; so that stood out for me,” she said.

Ngetich said the company’s goal is “to be the leading [computer] chip manufacturer in Africa.” 

Semiconductors are used in almost every sector of electronics. In consumer electronics, for example, they are used in microwave ovens, refrigerators, mobile phones, laptops, and video game consoles. 

Lorna Muturi, a mechatronics engineer who will be graduating from DEKUT this year, is just 22 years old, but already has been working at STL for two years.

“We build the semiconductor manufacturing machine within the plant and as a mechatronics engineer, I am involved in the automation of the system; [and] also involved in the diagnostics of the system in case there’s an issue,” she explained about her job.

Muturi said that at STL, she works with people who are comfortable with her and accept her as a woman engineer. Now she’s able to go out and inspire others to join the STEM field.

STL CEO Githinji said the company prides itself on its female-majority workforce, which he attributed to an extremely vigilant human resource development program.

“What you see at STL, whether it’s deliberate or inadvertent, is the result of pretty rigorous attention to the human resource capacity of the individual. It so turns out that these young women in STEM at STL have a very compelling story to tell. They are extremely intelligent, they are doing exceptionally well, training very well and they are producing very well,” he told VOA.

He added, “We also do have a lot of young men who perform very well and are exceptional in what they do.”

Looking ahead

Githinji said the company is not profitable yet.

“We are still in the phase of building capacity, so there’s a lot of expense that sinks into creating that capacity,” he said. “The good news, though, is that we have customers, we have products, we have the view that these products are going to be more and more adaptable and compelling in the marketplace.”

The company is working to establish relationships with other universities in Kenya, such as Strathmore and University of Eldoret, as well as in Uganda and Rwanda.

Githinji said he has also established a foundation named after his mother and his mother-in-law, with a goal of empowering under-privileged girls through STEM. With partners, he has built a computer lab in a remote village near Mount Kenya with about 20 workstations so kids and their families can benefit. 


AI Chatbots Offer Comfort to the Bereaved

Staying in touch with a loved one after their death is the promise of several start-ups using the powers of artificial intelligence, though not without raising ethical questions.

Ryu Sun-yun sits in front of a microphone and a giant screen, where her husband, who died a few months earlier, appears.

“Sweetheart, it’s me,” the man on the screen tells her in a video demo. In tears, she answers him, and a semblance of conversation begins.

When Lee Byeong-hwal learned he had terminal cancer, the 76-year-old South Korean asked startup DeepBrain AI to create a digital replica using several hours of video.

“We don’t create new content” such as sentences that the deceased would have never uttered or at least written and validated during their lifetime, said Joseph Murphy, head of development at DeepBrain AI, about the “Rememory” program.

“I’ll call it a niche part of our business. It’s not a growth area for us,” he cautioned.

The idea is the same for StoryFile, a company that uses 92-year-old “Star Trek” actor William Shatner to market its site.

“Our approach is to capture the wonder of an individual, then use the AI tools,” said Stephen Smith, boss of StoryFile, which claims several thousand users of its Life service.

Entrepreneur Pratik Desai caused a stir a few months ago when he suggested people save audio or video of “your parents, elders and loved ones,” estimating that by “the end of this year” it would be possible to create an autonomous avatar of a deceased person, and that he was working on a project to this end.

The message posted on Twitter set off a storm, to the point that, a few days later, he denied being “a ghoul.”

“This is a very personal topic and I sincerely apologize for hurting people,” he said.

“It’s a very fine ethical area that we’re taking with great care,” Smith said.

After the death of her best friend in a car accident in 2015, Russian engineer Eugenia Kuyda, who emigrated to California, created a chatbot named Roman after her dead friend, feeding it thousands of text messages he had sent to loved ones.

Two years later Kuyda launched Replika, which offers personalized conversational robots, among the most sophisticated on the market.

But despite the Roman precedent, Replika “is not a platform made to recreate a lost loved one,” a spokesperson said.

Somnium Space, based in London, wants to create virtual clones while users are still alive so that they then can exist in a parallel universe after their death.

“It’s not for everyone,” CEO Artur Sychov conceded in a video posted on YouTube about his product, Live Forever, which he is announcing for the end of the year.

“Do I want to meet my grandfather who’s in AI? I don’t know. But those who want that will be able to,” he added.

Thanks to generative AI, the technology is there to allow avatars of departed loved ones to say things they never said when they were alive.

“I think these are philosophical challenges, not technical challenges,” said Murphy of DeepBrain AI.

“I would say that is a line right now that we do not plan on crossing, but who knows what the future holds?” he added.

“I think it can be helpful to interact with an AI version of a person in order to get closure — particularly in situations where grief was complicated by abuse or trauma,” said Candi Cann, a professor at Baylor University who is currently researching this topic in South Korea.

Mari Dias, a professor of medical psychology at Johnson & Wales University, has asked many of her bereaved patients about virtual contact with their loved ones.

“The most common answer is ‘I don’t trust AI. I’m afraid it’s going to say something I’m not going to accept.’ … I get the impression that they think they don’t have control” over what the avatar does.


Apple, Defying the Times, Stays Quiet on AI

Resisting the hype, Apple defied most predictions this week and made no mention of artificial intelligence when it unveiled its latest slate of new products, including its Vision Pro mixed reality headset.

Generative AI has become the tech world’s biggest buzzword since Microsoft-backed OpenAI released ChatGPT late last year, revealing the capabilities of the emerging technology. 

ChatGPT opened the world’s eyes to the idea that computers can churn out complex, human-level content using simple prompts, giving amateurs the talents of tech geeks, artists or speechwriters. 

Apple has laid low as Microsoft and Google raced out announcements on how generative AI will revolutionize their products, from online search to word processing and image retouching.

During the recent earnings season, tech CEOs peppered mentions of AI into their every phrase, eager to reassure investors that they wouldn’t miss Silicon Valley’s next big chapter.

Apple has chosen to be much more discreet and, in its closely watched keynote address at its Worldwide Developers Conference in California, never once mentioned AI specifically.

“Apple ghosts the generative AI revolution,” said a headline in Wired Magazine after the event. 

‘Not necessarily AI?’

Arguments vary on why Apple has chosen a more subtle approach. 

For one, Apple follows other critics who have long been wary of the catchall “AI” term, believing it is too vague and unhelpfully evokes dystopian nightmares of killer robots and human subjugation to machines. 

For this reason, some companies – including TikTok and Facebook parent Meta – roll out AI innovations without necessarily touting them as such. 

“We do integrate it into our products [but] people don’t necessarily think about it as AI,” Apple CEO Tim Cook told ABC News this week.

Indeed, AI was actually very much part of Apple’s annual jamboree on Monday, but it required a level of technical know-how to notice.

In one instance, Apple’s head of software said “on-device machine learning” would enhance autocorrect for iPhone messaging when he could have just as well said AI.

Apple’s autocorrect innovation drew giggles with the promise of iPhones no longer correcting common expletives.

“In those moments where you just want to type a ‘ducking’ word, well, the keyboard will learn it, too,” said Craig Federighi.

Autocorrect will also learn from your writing style, helping it guide suggestions, using AI technology similar to what powers ChatGPT.

In another example, a new iPhone app called Journal, an interactive diary, would use “on-device machine learning … to inspire your writing,” Apple said, again not referring to AI when other companies would have.

But AI will also play a major role in the Vision Pro headset when it is released next year, helping, for example, generate a user’s digital persona for video-conferencing.

‘Not much effort’

For some analysts, the non-mention of AI is an acknowledgement by Apple that it lost ground against rivals. 

“They haven’t put much effort into it,” independent tech analyst Rob Enderle told AFP. 

“I think they just kind of felt that AI was off into the future and it wasn’t anything surprising,” he added. 

The glitchy performance of Apple’s voice assistant Siri, launched more than a decade ago, has also fed the feeling that the smartphone giant doesn’t get AI. 

“I think most people would agree that Apple lost its edge with Siri. That’s probably the most obvious way they fell behind,” said Insider Intelligence principal analyst Yory Wurmser. 

But Wurmser also insisted that Apple is primarily a device company and that AI, which is software, will always be “the means rather than the ends for a great user experience” on its premium devices.

In this vein, for analyst Dan Ives of Wedbush Securities, the release of Apple’s Vision Pro headset was in itself an AI play, even if it wasn’t explicitly spelled out that way.

“We continue to strongly believe this is the first step in a broader strategy for Apple to build out a generative AI driven app ecosystem” on the Vision Pro, he said. 


Financial Institutions in US, East Asia Spoofed by Suspected North Korean Hackers

There are renewed concerns North Korea’s army of hackers is targeting financial institutions to prop up the regime in Pyongyang and possibly fund its weapons programs.

A report published Tuesday by the cybersecurity firm Recorded Future finds North Korea-aligned actors have been spoofing well-known financial firms in Japan, Vietnam and the United States, sending out emails and documents that, if opened, could grant the hackers access to critical systems.

“The targeting of investment banking and venture capital firms may expose sensitive or confidential information of these entities or their customers,” according to the report by Recorded Future’s Insikt Group.

“[It] may result in legal or regulatory action, jeopardize pending business negotiations or agreements, or expose information damaging to the company’s strategic investment portfolio,” it said.

The report said the most recent cluster of activity took place between September 2022 and March 2023, making use of three new internet addresses and two old addresses, and more than 20 domain names.

Some of the domains imitated those used by the targeted financial institutions.

Recorded Future named the group behind the attacks Threat Activity Group 71 (TAG-71), which is also known as APT38, Bluenoroff, Stardust Chollima and the Lazarus Group.

This past April, the U.S. sanctioned three individuals associated with the Lazarus Group, accusing them of helping North Korea launder stolen virtual currency and turn it into cash.

U.S. Treasury officials levied additional sanctions just last month against North Korea’s Technical Reconnaissance Bureau, which develops tools and operations to be carried out by the Lazarus Group.

The Lazarus Group is believed to be responsible for the largest theft of virtual currency to date, stealing approximately $620 million connected to a popular online game in March 2022.

Earlier this month, U.S. and South Korean agencies issued a warning about another set of North Korean cyber actors impersonating think tanks, academic institutions and journalists in an ongoing attempt to collect intelligence.

 


Japan, Australia, US to Fund Undersea Cable Connection in Micronesia to Counter China’s Influence

Japan announced Tuesday that it has joined the United States and Australia in funding a $95 million undersea cable project that will connect East Micronesian island nations, improving networks in the Indo-Pacific region where China is increasingly expanding its influence.

The approximately 2,250-kilometer (1,400-mile) undersea cable will connect the state of Kosrae in the Federated States of Micronesia, Tarawa in Kiribati and Nauru to the existing cable landing point in Pohnpei in Micronesia, according to the Japanese Foreign Ministry.

Japan, the United States and Australia have stepped up cooperation with the Pacific Islands, apparently to counter efforts by Beijing to expand its security and economic influence in the region.

In a joint statement, the parties said next steps involve a final survey and design and manufacturing of the cable, whose width is about that of a garden hose. The completion is expected around 2025.

The announcement comes just over two weeks after leaders of the Quad, a security alliance of Japan, the United States, Australia and India, emphasized the importance of undersea cables as a critical component of communications infrastructure and the foundation for internet connectivity.

“Secure and resilient digital connectivity has never been more important,” Matthew Murray, a senior official in the U.S. State Department’s Bureau of East Asian and Pacific Affairs, said in a statement. “The United States is delighted to be part of this project bringing our region closer together.”

NEC Corp., which won the contract after a competitive tender, said the cable will ensure high-speed, high-quality and more secure communications for residents, businesses and governments in the region, while contributing to improved digital connectivity and economic development.

The cable will connect more than 100,000 people across the three Pacific countries, according to Kazuya Endo, director general of the international cooperation bureau at the Japanese Foreign Ministry.

 


Musk Says China Detailed Plans to Regulate AI

Top Chinese officials told Elon Musk about plans to launch new regulations on artificial intelligence on his recent trip to the Asian giant, the tech billionaire said Monday, in his first comments on the two-day visit.

The Twitter owner and Tesla CEO — one of the world’s richest men — held meetings with senior officials in Beijing and employees in Shanghai last week.

“Something that is worth noting is that on my recent trip to China, with the senior leadership there, we had, I think, some very productive discussions on artificial intelligence risks, and the need for some oversight or regulation,” Musk said. “And my understanding from those conversations is that China will be initiating AI regulation in China.”

Praised China

Musk, whose extensive interests in China have long raised eyebrows in Washington, spoke about the exchange in a livestreamed Twitter discussion with Democratic presidential hopeful and vaccine conspiracy theorist Robert Kennedy Jr., the nephew of the late U.S. President John F. Kennedy.

Musk did not tweet while in China and Tesla has not released readouts of Musk’s meeting with officials.

But official Chinese channels said he lavished praise on the country, including for its “vitality and promise,” and expressed “full confidence in the China market.”

Several Chinese companies have been rushing to develop AI services that can mimic human speech since San Francisco-based OpenAI launched ChatGPT in November.

But rapid advancements have stoked global alarm over the technology’s potential for disinformation and misuse.

Musk didn’t elaborate on his discussions in China but was likely referring to a sweeping draft law requiring new AI products to undergo a security assessment before release and a process ensuring that they reflect “core socialist values.”

The “Administrative Measures for Generative Artificial Intelligence Services” edict bans content promoting “terrorist or extremist propaganda,” “ethnic hatred” or “other content that may disrupt economic and social order.”

Under Beijing’s highly centralized political system, the measures are almost certain to become law.

Describes meetings as ‘promising’

Musk has caused controversy by suggesting the self-ruled island of Taiwan should become part of China — a stance that was welcomed by Chinese officials but which deeply angered Taipei.

The 51-year-old South African native described his meetings in China as “very promising.”

“I pointed out that if there is a digital super intelligence that is overwhelmingly powerful, developed in China, it is actually a risk to the sovereignty of the Chinese government,” he said. “And I think they took that concern to heart.”


Is It Real or Made by AI? Europe Wants a Label as It Fights Disinformation 

The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.

EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises “fresh challenges for the fight against disinformation.”

Jourova said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed up to the 27-nation bloc’s voluntary agreement on combating disinformation to dedicate efforts to tackling the AI problem.

Online platforms that have integrated generative AI into their services, such as Microsoft’s Bing search engine and Google’s Bard chatbot, should build safeguards to prevent “malicious actors” from generating disinformation, Jourova said at a briefing in Brussels.

Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to “recognize such content and clearly label this to users,” she said.

Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, “I don’t see any right for the machines to have the freedom of speech.”

The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won’t take effect for several years.

Officials in the EU, which is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative artificial intelligence.

The voluntary commitments in the disinformation code will soon become legal obligations under the EU’s Digital Services Act, which will force the biggest tech companies by the end of August to better police their platforms to protect users from hate speech, disinformation and other harmful material.

Jourova said, however, that those companies should start labeling AI-generated content immediately.

Most of those digital giants are already signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress.

Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company after he bought it last year.

The exit drew a stern rebuke, with Jourova calling it a mistake.

“Twitter has chosen the hard way. They chose confrontation,” she said. “Make no mistake, by leaving the code, Twitter has attracted a lot of attention and its actions and compliance with EU law will be scrutinized vigorously and urgently.”


App Offering Government Services to Ukrainians Expands Reach

In collaboration with the Ukrainian government, the U.S. Agency for International Development, or USAID, has created an app that connects Ukrainians with their government so they can access public services — and use of the app’s code has expanded to different countries. Iryna Matviichuk has the story, narrated by Anna Rice.


Amazon to Pay $31 Million in Privacy Violation Penalties for Alexa Voice Assistant, Ring Camera

Amazon agreed Wednesday to pay a $25 million civil penalty to settle Federal Trade Commission allegations it violated a child privacy law and deceived parents by keeping for years kids’ voice and location data recorded by its popular Alexa voice assistant.

Separately, the company agreed to pay $5.8 million in customer refunds for alleged privacy violations involving its doorbell camera Ring.

The Alexa-related action orders Amazon to overhaul its data deletion practices and impose stricter, more transparent privacy measures. It also obliges the tech giant to delete certain data collected by its internet-connected digital assistant, which people use for everything from checking the weather to playing games and queueing up music.

“Amazon’s history of misleading parents, keeping children’s recordings indefinitely, and flouting parents’ deletion requests violated COPPA (the Children’s Online Privacy Protection Act) and sacrificed privacy for profits,” Samuel Levine, the FTC consumer protection chief, said in a statement. The 1998 law is designed to shield children from online harms.

FTC Commissioner Alvaro Bedoya said in a statement that “when parents asked Amazon to delete their kids’ Alexa voice data, the company did not delete all of it.”

The agency ordered the company to delete inactive child accounts as well as certain voice and geolocation data.

Amazon kept the kids’ data to refine its voice recognition algorithm, the artificial intelligence behind Alexa, which powers Echo and other smart speakers, Bedoya said. The FTC complaint sends a message to all tech companies that are “sprinting to do the same” amid fierce competition in developing AI datasets, he added.

“Nothing is more visceral to a parent than the sound of their child’s voice,” tweeted Bedoya, the father of two small children.

Amazon said last month that it has sold more than a half-billion Alexa-enabled devices globally and that use of the service increased 35% last year.

In the Ring case, the FTC says Amazon’s home security camera subsidiary let employees and contractors access consumers’ private videos and provided lax security practices that enabled hackers to take control of some accounts.

Amazon bought California-based Ring in 2018, and many of the violations alleged by the FTC predate the acquisition. Under the FTC’s order, Ring is required to pay $5.8 million that would be used for consumer refunds.

Amazon said it disagreed with the FTC’s claims on both Alexa and Ring and denied violating the law. But it said the settlements “put these matters behind us.”

“Our devices and services are built to protect customers’ privacy, and to provide customers with control over their experience,” the Seattle-based company said.

In addition to the fine in the Alexa case, the proposed order prohibits Amazon from using deleted geolocation and voice information to create or improve any data product. The order also requires Amazon to create a privacy program for its use of geolocation information.

The proposed orders must be approved by federal judges.

FTC commissioners had unanimously voted to file the charges against Amazon in both cases.
