Russia Fines Google $32,000 for Videos About Ukraine Conflict

A Russian court on Thursday imposed a $32,000 fine on Google for failing to delete allegedly false information about the conflict in Ukraine.

The move by a magistrate’s court follows similar actions in early August against Apple and the Wikimedia Foundation, which hosts Wikipedia.

According to Russian news reports, the court found that the YouTube video service, which is owned by Google, was guilty of not deleting videos with incorrect information about the conflict — which Russia characterizes as a “special military operation.”

Google was also found guilty of not removing videos that suggested ways of gaining entry to facilities which are not open to minors, news agencies said, without specifying what kind of facilities were involved.

In Russia, a magistrate court typically handles administrative violations and low-level criminal cases.

Since sending troops into Ukraine in February 2022, Russia has enacted an array of measures to punish any criticism or questioning of the military campaign.

Some critics have received severe punishments. Opposition figure Vladimir Kara-Murza was sentenced this year to 25 years in prison for treason stemming from speeches he made against Russia’s actions in Ukraine.

Texas OKs Plan to Mandate Tesla Tech for EV Chargers in State

Texas on Wednesday approved its plan to require companies to include Tesla’s technology in electric vehicle charging stations to be eligible for federal funds, despite calls for more time to re-engineer and test the connectors.

The decision by Texas, the biggest recipient of a $5 billion program meant to electrify U.S. highways, is being closely watched by other states and is a step forward in Tesla CEO Elon Musk’s plans to make the company’s technology the U.S. charging standard.

Tesla’s efforts are facing early tests as some states start rolling out the funds. The company won a slew of projects in Pennsylvania’s first round of funding announced on Monday but none in Ohio last month.

Federal rules require companies to offer the rival Combined Charging System, or CCS, a U.S. standard preferred by the Biden administration, as a minimum to be eligible for the funds.

But individual states can add their own requirements on top of CCS before distributing the federal funds at a local level.

Ford Motor and General Motors’ announcement about two months ago that they planned to adopt Tesla’s North American Charging Standard, or NACS, sent shockwaves through the industry and prompted a number of automakers and charging companies to embrace the technology.

In June, Reuters reported that Texas, which will receive and deploy $407.8 million over five years, planned to mandate companies to include Tesla’s plugs. Washington state has talked about similar plans, and Kentucky has mandated it.

Florida, another major recipient of funds, recently revised its plans, saying it would mandate NACS one year after standards body SAE International, which is reviewing the technology, formally recognizes it. 

Some charging companies wrote to the Texas Transportation Commission opposing the requirement in the first round of funds, citing concerns that supply chain and certification issues with Tesla’s connectors could put the successful deployment of EV chargers at risk.

That forced Texas to defer a vote on the plan twice as it sought to understand NACS and its implications, before the commission voted unanimously to approve the plan on Wednesday.

“The two-connector approach being proposed will help assure coverage of a minimum of 97% of the current, over 168,000 electric vehicles with fast charge ports in the state,” Humberto Gonzalez, a director at Texas’ department of transportation, said while presenting the state’s plan to the commissioners.

Musk’s X Delays Access to Content on Reuters, NY Times, Social Media Rivals

Social media company X, formerly known as Twitter, delayed access to links to content on the Reuters and New York Times websites as well as rivals like Bluesky, Facebook and Instagram, according to a Washington Post report on Tuesday.

Clicking a link on X to one of the affected websites resulted in a delay of about five seconds before the webpage loaded, The Washington Post reported, citing tests it conducted on Tuesday. Reuters also saw a similar delay in tests it ran.

By late Tuesday afternoon, X appeared to have eliminated the delay. When contacted for comment, X confirmed the delay was removed but did not elaborate.

Billionaire Elon Musk, who bought Twitter in October 2022, has previously lashed out at news organizations and journalists who have reported critically on his companies, which include Tesla and SpaceX. Twitter has previously prevented users from posting links to competing social media platforms.

Reuters could not establish the precise time when X began delaying links to some websites.

A user on Hacker News, a tech forum, posted about the delay earlier on Tuesday and wrote that X began delaying links to the New York Times on Aug. 4. On that day, Musk criticized the publication’s coverage of South Africa and accused it of supporting calls for genocide. Reuters has no evidence that the two events are related.

A spokesperson for the New York Times said it has not received an explanation from X about the link delay.

“While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” the spokesperson said on Tuesday.

A Reuters spokesperson said: “We are aware of the report in the Washington Post of a delay in opening links to Reuters stories on X. We are looking into the matter.”

Bluesky, an X rival that has Twitter co-founder Jack Dorsey on its board, did not reply to a request for comment.

Meta, which owns Facebook and Instagram, did not immediately respond to a request for comment.

In Seattle, VP Harris Touts Administration Efforts to Boost Clean Energy

Vice President Kamala Harris marked the one-year anniversary of the Inflation Reduction Act by touting the Biden administration’s commitment to mitigating the climate crisis. Natasha Mozgovaya reports from Seattle.

Google to Train 20,000 Nigerians in Digital Skills

Google plans to train 20,000 Nigerian women and youth in digital skills and provide a grant of $1.6 million to help the government create 1 million digital jobs in the country, its Africa executives said on Tuesday. 

Nigeria plans to create digital jobs for its teeming youth population, Vice President Kashim Shettima told Google Africa executives during a meeting in Abuja. Shettima did not provide a timeline for creating the jobs. 

Google Africa executives said a grant from its philanthropic arm in partnership with Data Science Nigeria and the Creative Industry Initiative for Africa will facilitate the program. 

Shettima said Google’s initiative aligned with the government’s commitment to increase youth participation in the digital economy. The government is also working with the country’s banks on the project, Shettima added. 

Google director for West Africa Olumide Balogun said the company would commit funds and provide digital skills to women and young people in Nigeria and also enable startups to grow, which will create jobs. 

Google is committed to investing in digital infrastructure across Africa, Charles Murito, Google Africa’s director of government relations and public policy, said during the meeting, adding that digital transformation can be a job enabler. 

Fiction Writers Fear Rise of AI, Yet See It as a Story

For a vast number of book writers, artificial intelligence is a threat to their livelihood and the very idea of creativity. More than 10,000 of them endorsed an open letter from the Authors Guild this summer, urging AI companies not to use copyrighted work without permission or compensation.

At the same time, AI is a story to tell, and no longer just science fiction.

As present in the imagination as politics, the pandemic, or climate change, AI has become part of the narrative for a growing number of novelists and short story writers who only need to follow the news to imagine a world upended.

“I’m frightened by artificial intelligence, but also fascinated by it. There’s a hope for divine understanding, for the accumulation of all knowledge, but at the same time there’s an inherent terror in being replaced by non-human intelligence,” said Helen Phillips, whose upcoming novel “Hum” tells of a wife and mother who loses her job to AI.

“We’ve been seeing more and more about AI in book proposals,” said Ryan Doherty, vice president and editorial director at Celadon Books, which recently signed Fred Lunzker’s novel “Sike,” featuring an AI psychiatrist.

“It’s the zeitgeist right now. And whatever is in the cultural zeitgeist seeps into fiction,” Doherty said. 

Other AI-themed novels expected in the next two years include Sean Michaels’ “Do You Remember Being Born?” — in which a poet agrees to collaborate with an AI poetry company; Bryan Van Dyke’s “In Our Likeness,” about a bureaucrat and a fact-checking program with the power to change facts; and A.E. Osworth’s “Awakened,” about a gay witch and her titanic clash with AI.

Crime writer Jeffrey Diger, known for his thrillers set in contemporary Greece, is working on a novel touching upon AI and the metaverse, the outgrowth of being “continually on the lookout for what’s percolating on the edge of societal change,” he said.

Authors are invoking AI to address the most human questions.

In Sierra Greer’s “Annie Bot,” the title character is an AI mate designed for a human male. Greer said the novel was a way to explore her character’s “urgent desire to please,” adding that a robot girlfriend enabled her “to explore desire, respect, and longing in ways that felt very new and strange to me.”

Amy Shearn’s “Animal Instinct” has its origins in the pandemic and in her personal life; she was recently divorced and had begun using dating apps.

“It’s so weird how, with apps, you start to feel as if you’re going person-shopping,” she said. “And I thought, wouldn’t it be great if you could really pick and choose the best parts of all these people you encounter and sort of cobble them together to make your ideal person?”

“Of course,” she added, “I don’t think anyone actually knows what their ideal person is, because so much of what draws us to mates is the unexpected, the ways in which people surprise us. That said, it seemed like an interesting premise for a novel.”

Some authors aren’t just writing about AI, but openly working with it.

Earlier this year, journalist Stephen Marche used AI to write the novella “Death of An Author,” for which he drew upon everyone from Raymond Chandler to Haruki Murakami. Screenwriter and humorist Simon Rich collaborated with Brent Katz and Josh Morgenthau for “I Am Code,” a thriller in verse that came out this month and was generated by the AI program “code-davinci-002.” (Filmmaker Werner Herzog reads the audiobook edition.) 

Osworth, who is trans, wanted to address comments by “Harry Potter” author J.K. Rowling that have offended many in the trans community, and to wrest from her the power of magic. At the same time, they worried the fictional AI in their book sounded too human, and decided AI should speak for AI.

Osworth devised a crude program, based on the writings of Machiavelli among others, that would turn out a more mechanical kind of voice.

“I like to say that ChatGPT is a Ferrari, while what I came up with is a skateboard with one square wheel. But I was much more interested in the skateboard with one square wheel,” they said.

Michaels centers his new novel on a poet named Marian, in homage to poet Marianne Moore, and an AI program called Charlotte. He said the novel is about parenthood, labor, community, and “this technology’s implications for art, language and our sense of identity.”

Believing the spirit of “Do You Remember Being Born?” called for the presence of actual AI text, he devised a program that would generate prose and poetry, and uses an alternate format in the novel so readers know when he’s using AI.

In one passage, Marian is reviewing some of her collaboration with Charlotte.

“The preceding day’s work was a collection of glass cathedrals. I reread it with alarm. Turns of phrase I had mistaken for beautiful, which I now found unintelligible,” Michaels writes. “Charlotte had simply surprised me: I would propose a line, a portion of a line, and what the system spat back upended my expectations. I had been seduced by this surprise.”

And now AI speaks: “I had mistaken a fit of algorithmic exuberance for the truth.”

Chinese Surveillance Firm Selling Cameras With ‘Skin Color Analytics’

IPVM, a U.S.-based security and surveillance industry research group, says the Chinese surveillance equipment maker Dahua is selling cameras with what it calls a “skin color analytics” feature in Europe, raising human rights concerns. 

In a report released on July 31, IPVM said “the company defended the analytics as being a ‘basic feature of a smart security solution.'” The report is behind a paywall, but IPVM provided a copy to VOA Mandarin. 

Dahua’s ICC Open Platform guide for “human body characteristics” includes “skin color/complexion,” according to the report. In what Dahua calls a “data dictionary,” the company says that the “skin color types” its analytic tools would target are “yellow,” “black” and “white.” VOA Mandarin verified this on Dahua’s Chinese website. 

The IPVM report also says that skin color detection is mentioned in the “Personnel Control” category, a feature Dahua touts as part of its Smart Office Park solution intended to provide security for large corporate campuses in China.  

Charles Rollet, co-author of the IPVM report, told VOA Mandarin by phone on August 1, “Basically what these video analytics do is that, if you turn them on, then the camera will automatically try and determine the skin color of whoever passes, whoever it captures in the video footage. 

“So that means the camera is going to be guessing or attempting to determine whether the person in front of it … has black, white or yellow — in their words — skin color,” he added.  

VOA Mandarin contacted Dahua for comment but did not receive a response. 

The IPVM report said that Dahua is selling cameras with the skin color analytics feature in three European nations: Germany, France and the Netherlands, each of which has a recent history of racial tension.

‘Skin color is a basic feature’

Dahua said its skin tone analysis capability was an essential function in surveillance technology.  

 In a statement to IPVM, Dahua said, “The platform in question is entirely consistent with our commitments to not build solutions that target any single racial, ethnic, or national group. The ability to generally identify observable characteristics such as height, weight, hair and eye color, and general categories of skin color is a basic feature of a smart security solution.”  

IPVM said the company had previously denied offering such a feature, and that skin color detection is uncommon in mainstream surveillance tech products. 

In many Western nations, there has long been a controversy over errors due to skin color in surveillance technologies for facial recognition. Identifying skin color in surveillance applications raises human rights and civil rights concerns.  

“So it’s unusual to see it for skin color because it’s such a controversial and ethically fraught field,” Rollet said.  

Anna Bacciarelli, technology manager at Human Rights Watch (HRW), told VOA Mandarin that Dahua technology should not contain skin tone analytics.   

“All companies have a responsibility to respect human rights, and take steps to prevent or mitigate any human rights risks that may arise as a result of their actions,” she said in an email.

“Surveillance software with skin tone analytics poses a significant risk to the right to equality and non-discrimination, by allowing camera owners and operators to racially profile people at scale — likely without their knowledge, infringing privacy rights — and should simply not be created or sold in the first place.”  

Dahua denied that its surveillance products are designed to enable racial identification. On the website of its U.S. company, Dahua says, “contrary to allegations that have been made by certain media outlets, Dahua Technology has not and never will develop solutions targeting any specific ethnic group.” 

However, in February 2021, IPVM and the Los Angeles Times reported that Dahua provided a video surveillance system with “real-time Uyghur warnings” to the Chinese police that included eyebrow size, skin color and ethnicity.  

IPVM’s 2018 statistical report shows that since 2016, Dahua and another Chinese video surveillance company, Hikvision, have won contracts worth $1 billion from the government of China’s Xinjiang region, a center of Uyghur life. 

The U.S. Federal Communications Commission determined in 2022 that the products of Chinese technology companies such as Dahua and Hikvision, which have close ties to Beijing, posed a threat to U.S. national security. 

The FCC banned sales of these companies’ products in the U.S. “for the purpose of public safety, security of government facilities, physical security surveillance of critical infrastructure, and other national security purposes,” but not for other purposes.  

Before the U.S. sales bans, Hikvision and Dahua ranked first and second among global surveillance and access control firms, according to The China Project.  

‘No place in a liberal democracy’

On June 14, the European Union passed a revision proposal to its draft Artificial Intelligence Act, a precursor to completely banning the use of facial recognition systems in public places.  

“We know facial recognition for mass surveillance from China; this technology has no place in a liberal democracy,” Svenja Hahn, a German member of the European Parliament and Renew Europe Group, told Politico.  

Bacciarelli of HRW said in an email she “would seriously doubt such racial profiling technology is legal under EU data protection and other laws. The General Data Protection Regulation, a European Union regulation on Information privacy, limits the collection and processing of sensitive personal data, including personal data revealing racial or ethnic origin and biometric data, under Article 9. Companies need to make a valid, lawful case to process sensitive personal data before deployment.” 

“The current text of the draft EU AI Act bans intrusive and discriminatory biometric surveillance tech, including real-time biometric surveillance systems; biometric systems that use sensitive characteristics, including race and ethnicity data; and indiscriminate scraping of CCTV data to create facial recognition databases,” she said.  

In Western countries, companies are developing AI software for identifying race primarily as a marketing tool for selling to diverse consumer populations. 

The Wall Street Journal reported in 2020 that American cosmetics company Revlon had used recognition software from AI start-up Kairos to analyze how consumers of different ethnic groups use cosmetics, raising concerns among researchers that racial recognition could lead to discrimination.  

The U.S. government has long prohibited sectors such as healthcare and banking from discriminating against customers based on race. IBM, Google and Microsoft have restricted the provision of facial recognition services to law enforcement.  

Twenty-four states, counties and municipal governments in the U.S. have prohibited government agencies from using facial recognition surveillance technology. New York City, Baltimore, and Portland, Oregon, have even restricted the use of facial recognition in the private sector.  

Some civil rights activists have argued that racial identification technology is error-prone and could have adverse consequences for those being monitored. 

Rollet said, “If the camera is filming at night or if there are shadows, it can misclassify people.”  

Caitlin Chin is a fellow at the Center for Strategic and International Studies, a Washington think tank where she researches technology regulation in the United States and abroad. She emphasized that while Western technology companies mainly use facial recognition for business, Chinese technology companies are often happy to assist government agencies in monitoring the public.  

She told VOA Mandarin in an August 1 video call, “So this is something that’s both very dehumanizing but also very concerning from a human rights perspective, in part because if there are any errors in this technology that could lead to false arrests, it could lead to discrimination, but also because the ability to sort people by skin color on its own almost inevitably leads to people being discriminated against.”  

She also said that in general, especially when it comes to law enforcement and surveillance, people with darker skin have been disproportionately tracked and disproportionately surveilled, “so these Dahua cameras make it easier for people to do that by sorting people by skin color.”  

China to Require All Apps to Share Business Details in New Oversight Push

China will require all mobile app providers in the country to file business details with the government, its information ministry said, marking Beijing’s latest effort to keep the industry on a tight leash. 

The Ministry of Industry and Information Technology (MIIT) said late on Tuesday that apps without proper filings will be punished after a grace period ending in March next year, a move that experts say could restrict the number of apps and hit small developers hard. 

You Yunting, a lawyer with Shanghai-based DeBund Law Offices, said the order is effectively requiring approvals from the ministry. The new rule is primarily aimed at combating online fraud but it will impact all apps in China, he said. 

Rich Bishop, co-founder of app publishing firm AppInChina, said the new rule is also likely to affect foreign-based developers that have been able to publish their apps easily through Apple’s App Store without showing any documentation to the Chinese government. 

Bishop said that in order to comply with the new rules, app developers now must either have a company in China or work with a local publisher.  

Apple did not immediately reply to a request for comment. 

The iPhone maker pulled over a hundred artificial intelligence (AI) apps from its App Store last week to comply with regulations after China introduced a new licensing regime for generative AI apps in the country.  

The ministry’s notice also said entities “engaged in internet information services through apps in such fields as news, publishing, education, film and television and religion should also submit relevant documents.” 

The requirement could affect the availability of popular social media apps such as X, Facebook and Instagram. Use of such apps is not allowed in China, but they can still be downloaded from app stores, enabling Chinese users to access them when traveling overseas. 

China already requires mobile games to obtain licenses before they launch in the country, and it had purged tens of thousands of unlicensed games from various app stores in 2020. 

Tencent’s WeChat, China’s most popular online social platform, said on Wednesday that mini apps, apps that can be opened within WeChat, must also follow the new rules. 

The company said that new mini apps must complete the filing before launch starting in September, while existing mini apps have until the end of March.  


US Launches Contest to Use AI to Prevent Government System Hacks

The White House on Wednesday said it had launched a multimillion-dollar cyber contest to spur use of artificial intelligence to find and fix security flaws in U.S. government infrastructure, in the face of growing use of the technology by hackers for malicious purposes.  

“Cybersecurity is a race between offense and defense,” said Anne Neuberger, the U.S. government’s deputy national security adviser for cyber and emerging technology.

“We know malicious actors are already using AI to accelerate identifying vulnerabilities or build malicious software,” she added in a statement to Reuters.

Numerous U.S. organizations, from health care groups to manufacturing firms and government institutions, have been the target of hacking in recent years, and officials have warned of future threats, especially from foreign adversaries.  

Neuberger’s comments about AI echo those Canada’s cybersecurity chief Samy Khoury made last month. He said his agency had seen AI being used for everything from creating phishing emails and writing malicious computer code to spreading disinformation.

The two-year contest includes around $20 million in rewards and will be led by the Defense Advanced Research Projects Agency, the U.S. government body in charge of creating technologies for national security, the White House said.

Google, Anthropic, Microsoft, and OpenAI — the U.S. technology firms at the forefront of the AI revolution — will make their systems available for the challenge, the government said.

The contest signals official attempts to tackle an emerging threat that experts are still trying to fully grasp. In the past year, U.S. firms have launched a range of generative AI tools such as ChatGPT that allow users to create convincing videos, images, texts, and computer code. Chinese companies have launched similar models to catch up.

Experts say such tools could make it far easier to, for instance, conduct mass hacking campaigns or create fake profiles on social media to spread false information and propaganda.  

“Our goal with the DARPA AI challenge is to catalyze a larger community of cyber defenders who use the participating AI models to race faster – using generative AI to bolster our cyber defenses,” Neuberger said.

The Open Source Security Foundation (OpenSSF), a U.S. group of experts trying to improve open source software security, will be in charge of ensuring the “winning software code is put to use right away,” the U.S. government said. 

US to Restrict High-Tech Investment in China

U.S. President Joe Biden is planning Wednesday to impose restrictions on U.S. investments in some high-tech industries in China.

Biden’s expected executive order could again heighten tensions between the U.S., the world’s biggest economy, and No. 2 China after a period in which leaders of the two countries have held several discussions aimed at airing their differences and seeking common ground.

The new restrictions would limit U.S. investments in such high-tech sectors in China as quantum computing, artificial intelligence and advanced semiconductors, but apparently not in the broader Chinese economy, which recently has been struggling to advance.

In a trip to China in July, Treasury Secretary Janet Yellen told Chinese Premier Li Qiang, “The United States will, in certain circumstances, need to pursue targeted actions to protect its national security. And we may disagree in these instances.”

National Security Adviser Jake Sullivan said in April that, to protect its security interests in the Indo-Pacific region and across the globe, the U.S. has implemented “carefully tailored restrictions on the most advanced semiconductor technology exports” to China.

“Those restrictions are premised on straightforward national security concerns,” he said. “Key allies and partners have followed suit, consistent with their own security concerns.”

Sullivan said the restrictions are not, as Beijing has claimed, a “technology blockade.”

Zoom, Symbol of Remote Work Revolution, Wants Workers Back in Office Part-time

The company whose name became synonymous with remote work is joining the growing return-to-office trend.

Zoom, the video conferencing pioneer, is asking employees who live within a 50-mile radius of its offices to work onsite two days a week, a company spokesperson confirmed in an email. The statement said the company has decided that “a structured hybrid approach – meaning employees that live near an office need to be onsite two days a week to interact with their teams – is most effective for Zoom.”

The new policy, which will be rolled out in August and September, was first reported by the New York Times, which said Zoom CEO Eric Yuan fielded questions from employees unhappy with the new policy during a Zoom meeting last week.

Zoom, based in San Jose, California, saw explosive growth during the first year of the COVID-19 pandemic as companies scrambled to shift to remote work, and even families and friends turned to the platform for virtual gatherings. But that growth has stagnated as the pandemic threat has ebbed.

Shares of Zoom Video Communications Inc. have tumbled hard since peaking early in the pandemic, from $559 apiece in October 2020 to below $70 on Tuesday. Shares have slumped more than 10% to start the month of August. In February, Zoom laid off about 1,300 people, or about 15% of its workforce.

Google, Salesforce and Amazon are among major companies that have also stepped up their return-to-office policies despite a backlash from some employees.

Similarly to Zoom, many companies are asking their employees to show up to the office only part time, as hybrid work shapes up to be a lasting legacy of the pandemic. Since January, the average weekly office occupancy rate in 10 major U.S. cities has hovered around 50%, dipping below that threshold during the summer months, according to Kastle Systems, which measures occupancy through entry swipes.

LogOn: Police Recruit AI to Analyze Police Body-Camera Footage

U.S. police reform advocates have long argued that police-worn body cameras will help reduce officers’ excessive use of force and work to build public trust. But the millions of hours of footage that so-called “body cams” generate are difficult for police supervisors to monitor. As Shelley Schlender explains, artificial intelligence may help.

Pope Warns Against Potential Dangers of Artificial Intelligence

Pope Francis on Tuesday called for a global reflection on the potential dangers of artificial intelligence (AI), noting the new technology’s “disruptive possibilities and ambivalent effects.”  

Francis, who is 86 and has said in the past that he does not know how to use a computer, issued the warning in a message for the Catholic Church’s next World Day of Peace, which falls on New Year’s Day.  

The Vatican released the message well in advance, as is customary.  

The pope “recalls the need to be vigilant and to work so that a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded,” it reads.  

“The urgent need to orient the concept and use of artificial intelligence in a responsible way, so that it may be at the service of humanity and the protection of our common home, requires that ethical reflection be extended to the sphere of education and law,” it adds.  

Back in 2015, Francis acknowledged being “a disaster” with technology, but he has also called the internet, social networks and text messages “a gift of God,” provided that they are used wisely.  

In 2020, the Vatican joined forces with tech giants Microsoft and IBM to promote the ethical development of AI and call for regulation of intrusive technologies such as facial recognition.

US Tech Groups Back TikTok in Challenge to Montana State Ban

Two tech groups on Monday backed TikTok Inc. in its lawsuit seeking to block enforcement of a Montana state ban on use of the short video sharing app before it takes effect on January 1.

NetChoice, a national trade association that includes major tech platforms, and Chamber of Progress, a tech-industry coalition, said in a joint court filing that “Montana’s effort to cut Montanans off from the global network of TikTok users ignores and undermines the structure, design, and purpose of the internet.”

TikTok, which is owned by China’s ByteDance, filed a suit in May seeking to block the first-of-its-kind U.S. state ban on several grounds, arguing it violates the First Amendment free speech rights of the company and users.

Analysts Say Use of Spyware During Conflict Is Chilling

The use of sophisticated spyware to hack into the devices of journalists and human rights defenders during a period of conflict in Armenia has alarmed analysts.

A joint investigation by digital rights organizations, including Amnesty International, found evidence of the surveillance software on devices belonging to 12 people, including a former government spokesperson.

The apparent targeting took place between October 2020 and December 2022, including during key moments in the Nagorno-Karabakh conflict, Amnesty reported.

The region has been at the center of a decades-long dispute between Azerbaijan and Armenia, which have fought two wars over the mountainous territory.

Elina Castillo Jiménez, a digital surveillance researcher at Amnesty International’s Security Laboratory, told VOA that her organization’s research — published earlier this year — confirmed that at least a dozen public figures in Armenia were targeted, including a former spokesperson for the Ministry of Foreign Affairs and a representative of the United Nations.

Others had reported on the conflict, including for VOA’s sister network Radio Free Europe/Radio Liberty; provided analysis; had sensitive conversations related to the conflict; or in some cases worked for organizations known to be critical of the government, the researchers found.

“The conflict may have been one of the reasons for the targeting,” Castillo said.

If, as Amnesty and others suspect, the timing is connected to the conflict, it would mark the first documented use of Pegasus in the context of an international conflict.

Researchers have found previously that Pegasus was used extensively in Azerbaijan to target civil society representatives, opposition figures and journalists, including the award-winning investigative reporter Khadija Ismayilova.

VOA reached out via email to the embassies of Armenia and Azerbaijan in Washington for comment but as of publication had not received a response.

Pegasus is a spyware marketed to governments by the Israeli digital security company NSO Group. The global investigative collaboration, The Pegasus Project, has been tracking the spyware’s use against human rights defenders, critics and others.

Since 2021, the U.S. government has imposed measures on NSO over the hacking revelations, saying its tools were used for “transnational repression.” U.S. actions include export limits on NSO Group and a March 2023 executive order that restricts the U.S. government’s use of commercial spyware like Pegasus.

VOA reached out to the NSO Group for comment but as of publication had not received a response.

Castillo said that Pegasus has the capability to infiltrate both iOS and Android phones.

Pegasus spyware is a “zero-click” mobile surveillance program. It can attack devices without any interaction from the individual who is targeted, gaining complete control over a phone or laptop and in effect transforming it into a spying tool against its owner, she said.

“The way that Pegasus operates is that it is capable of using elements within your iPhones or Androids,” said Castillo. “Imagine that it embed(s) something in your phone, and through that, then it can take control over it.”

The implications of the spyware are not lost on Ruben Melikyan. The lawyer, based in Armenia’s capital, Yerevan, is among those whose devices were infected.

An outspoken government critic, Melikyan has represented a range of opposition parliamentarians and activists.

The lawyer said he has concerns that the software could have allowed hackers to gain access to his data and information related to his clients.

“As a lawyer, my phone contained confidential information, and its compromise made me uneasy, particularly regarding the protection of my current and former clients’ rights,” he said.  

Melikyan told VOA that his phone had been targeted twice: in May 2021, when he was monitoring Armenian elections, and again during a tense period in the Armenia and Azerbaijan conflict in December 2022.

Castillo said she believes targeting individuals with Pegasus is a violation of “international humanitarian law” and that evidence shows it is “an absolute menace to people doing human rights work.”

She said the researchers are not able to confirm who commissioned the use of the spyware, but “we do believe that it is a government customer.”

When the findings were released this year, an NSO Group spokesperson said it was unable to comment but that earlier allegations of “improper use of our technologies” had led to the termination of contracts.

Amnesty International researchers are also investigating the potential use of a commercial spyware, Predator, which was found on Armenian servers.

“We have the evidence that suggests that it was used. However, further investigation is needed,” Castillo said, adding that their findings so far suggest that Pegasus is just “one of the threats against journalists and human rights defenders.”

This story originated in VOA’s Armenia Service.

US Mom Blames Face Recognition Technology for Flawed Arrest

A mother is suing the city of Detroit, saying unreliable facial recognition technology led to her being falsely arrested for carjacking while she was eight months pregnant. 

Porcha Woodruff was getting her two children ready for school the morning of February 16 when a half-dozen police officers showed up at her door to arrest her, taking her away in handcuffs, the 32-year-old Detroit woman said in a federal lawsuit.

“They presented her with an arrest warrant for robbery and carjacking, leaving her baffled and assuming it was a joke, given her visibly pregnant state,” her attorney wrote in a lawsuit accusing the city of false arrest. 

The suit, filed Thursday, argues that police relied on facial recognition technology that should not be trusted, given “inherent flaws and unreliability, particularly when attempting to identify Black individuals” such as Woodruff.

Some experts say facial recognition technology is more prone to error when analyzing the faces of people of color.

In a statement Sunday, the Wayne County prosecutor’s office said the warrant that led to Woodruff’s arrest was on solid ground, NBC News reported.

“The warrant was appropriate based upon the facts,” it said.

The case began in late January, when police investigating a reported carjacking by a gunman used imagery from a gas station’s security video to track down a woman believed to have been involved in the crime, according to the suit.

Facial recognition analysis from the video identified Woodruff as a possible match, the suit said.

Woodruff’s picture from a 2015 arrest was in a set of photos shown to the carjacking victim, who picked her out, according to the lawsuit.

Woodruff was freed on bond the day of her arrest and the charges against her were later dropped due to insufficient evidence, the civil complaint maintained. 

“This case highlights the significant flaws associated with using facial recognition technology to identify criminal suspects,” the suit argued.

Woodruff’s suit seeks unspecified financial damages plus legal fees. 

US Scientists Repeat Fusion Ignition Breakthrough

U.S. scientists have achieved net energy gain in a fusion reaction for the second time since December, the Lawrence Livermore National Laboratory said on Sunday.

Scientists at the California-based lab repeated the fusion ignition breakthrough in an experiment in the National Ignition Facility (NIF) on July 30 that produced a higher energy yield than in December, a Lawrence Livermore spokesperson said.

Final results are still being analyzed, the spokesperson added.

Lawrence Livermore achieved a net energy gain in a fusion experiment using lasers on Dec. 5, 2022. The scientists focused a laser on a target of fuel to fuse two light atoms into a denser one, releasing the energy.

That experiment briefly achieved what’s known as fusion ignition by generating 3.15 megajoules of energy output after the laser delivered 2.05 megajoules to the target, the Energy Department said.

In other words, it produced more energy from fusion than the laser energy used to drive it, the department said.
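The arithmetic behind “net energy gain” comes down to the target gain factor, usually written Q: fusion energy released divided by laser energy delivered to the target. A minimal sketch using only the figures reported above (this is an illustration of the calculation, not the laboratory’s analysis code):

```python
def target_gain(energy_out_mj: float, energy_in_mj: float) -> float:
    """Target gain Q: fusion energy released divided by laser energy delivered.

    Q > 1 means the target released more energy than the laser put into it.
    Note this excludes the far larger electrical energy needed to power
    the laser itself, so Q > 1 is not yet net gain for the whole facility.
    """
    return energy_out_mj / energy_in_mj

# Figures from the December 5, 2022 NIF shot reported above:
# 3.15 megajoules out after 2.05 megajoules were delivered to the target.
q = target_gain(3.15, 2.05)
print(f"Q = {q:.2f}")  # about 1.54, clearing the ignition threshold of Q > 1
```

By this definition, the July 30 repeat experiment, described as producing “a higher energy yield than in December,” would correspond to a still larger Q once final results are analyzed.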

The Energy Department called it “a major scientific breakthrough decades in the making that will pave the way for advancements in national defense and the future of clean power.”

Scientists have known for about a century that fusion powers the sun and have pursued developing fusion on Earth for decades. Such a breakthrough could one day help curb climate change if companies can scale up the technology to a commercial level in the coming decades.

Musk Says Fight with Zuckerberg Will Be Live-Streamed on X

Elon Musk said in a social media post that his proposed cage fight with Meta (META.O) CEO Mark Zuckerberg would be live-streamed on social media platform X, formerly known as Twitter. 

The social media moguls have been egging each other into a mixed martial arts cage match in Las Vegas since June.

“Zuck v Musk fight will be live-streamed on X. All proceeds will go to charity for veterans,” Musk said in a post on X early on Sunday morning, without giving any further details.

Earlier on Sunday, Musk had said on X that he was “lifting weights throughout the day, preparing for the fight,” adding that he did not have time to work out, so he brings the weights to work.

When a user on X asked Musk the point of the fight, Musk responded by saying “It’s a civilized form of war. Men love war.”

Meta did not respond to a Reuters request for comment on Musk’s post. 

The brouhaha began when Musk said in a June 20 post that he was “up for a cage match” with Zuckerberg, who is trained in jiujitsu.

A day later, Zuckerberg, 39, who has posted pictures of matches he has won on his company’s Instagram platform, asked Musk, 51, to “send location” for the proposed throwdown, to which Musk replied “Vegas Octagon”, referring to an events center where mixed martial arts (MMA) championship bouts are held.

Musk then said he would start training if the cage fight took shape. 

Australian Lawmakers Highlight Social Media’s Threat to National Security

A parliamentary committee investigating foreign interference in Australia has found that Chinese apps TikTok and WeChat could present major security risks.

In April, Australia said it would ban TikTok on government devices because of security fears. 

Lawmakers in Australia have sounded the alarm about the nefarious rise of social media and its power to spread disinformation and undermine trust. 

The Senate Select Committee on Foreign Interference through Social Media said that foreign interference was Australia’s most pressing national security threat. The parliamentary inquiry in Canberra found that the increased use of social media, including Chinese-owned apps TikTok and WeChat, could “corrupt our decision-making, political discourse and societal norms.”   

The report stated that “the Chinese government can require these social media companies to secretly cooperate with Chinese intelligence agencies.” 

Committee makes recommendations

The committee in Canberra has made 17 recommendations, including extending an April 2023 ban on TikTok on Australian government-issued devices to include WeChat, with the threat of fines and nationwide bans if the apps breach transparency guidelines.   

Senator James Paterson, who heads the committee and also serves as shadow minister for cyber security, told the Australian Broadcasting Corp. on Wednesday that the apps were being used to spread disinformation.  

“It is absolutely rife and it is occurring on all social media platforms,” said Paterson. “It is absolutely critical that any social media platform operating in Australia of any scale is able to be subject to Australian laws and regulation, and the oversight of our regulatory agencies and our parliament.”   

The Canberra government said it was considering all the committee’s recommendations. A government spokesperson asserted that foreign governments have used social media to harass diaspora communities and spread disinformation.  

TikTok responds

In a statement, TikTok said that while it disagreed with the way it had been characterized by the parliamentary inquiry, it welcomed the committee’s decision to not recommend an outright ban.   

It added that TikTok remained “committed to continuing an open and transparent dialogue with all levels of Australian government.” 

There has been no comment, so far, from WeChat.   

Meta, which owns Facebook, had previously told the inquiry that it had removed more than 200 foreign interference operations since 2017.  The U.S. company has warned that the internet’s democratic principles were increasingly being challenged by “strong forces.” 

Amazon Adds US-Wide Video Telemedicine Visits to Its Virtual Clinic

Amazon is adding video telemedicine visits in all 50 states to a virtual clinic it launched last fall, as the e-commerce giant pushes deeper into care delivery.

Amazon said Tuesday that customers can visit its virtual clinic around the clock through Amazon’s website or app. There, they can compare prices and response times before picking a telemedicine provider from several options.

The clinic, which doesn’t accept insurance, launched last fall with a focus on text message-based consultations. Those remain available in 34 states.

Virtual care, or telemedicine, exploded in popularity during the COVID-19 pandemic. It has remained popular as a convenient way to check in with a doctor or deal with relatively minor health issues like pink eye.

Amazon says its clinic offers care for more than 30 common health conditions. Those include sinus infections, acne, COVID-19 and acid reflux. The clinic also offers treatments for motion sickness, seasonal allergies and several sexual health conditions, including erectile dysfunction.

It also provides birth control and emergency contraception.

Chief Medical Officer Dr. Nworah Ayogu said in a blog post that the clinic aims to remove barriers to help people treat “everyday health concerns.”

“As a doctor, I’ve seen firsthand that patients want to be healthy but lack the time, tools, or resources to effectively manage their care,” Ayogu wrote.

Amazon said messaging-based consultations cost $35 on average while video visits cost $75.

That’s cheaper than the cost of many in-person visits with a doctor, which can run over $100 for people without insurance or coverage that makes them pay a high deductible.

While virtual visits can improve access to help, some doctors worry that they also lead to care fragmentation and can make it harder to track a patient’s overall health. That could happen if a patient has a regular doctor who doesn’t learn about the virtual visit from another provider.

In addition to virtual care, Amazon also sells prescription drugs through its Amazon Pharmacy business and has been building its presence with in-patient care.

Earlier this year, Amazon also closed a $3.9 billion acquisition of the membership-based primary care provider One Medical, which had about 815,000 customers and 214 medical offices in more than 20 markets.

One Medical offers both in-person care and virtual visits.

Anti-monopoly groups had called on the Federal Trade Commission to block the deal, arguing it would endanger patient privacy and help make the retailer more dominant in the marketplace. The agency didn’t block the deal but said it won’t rule out future challenges.

That deal was the first acquisition made under Amazon CEO Andy Jassy, who took over from founder Jeff Bezos in 2021. Jassy sees health care as a growth opportunity for the company.

Meta to Ask EU Users’ Consent to Share Data for Targeted Ads

Social media giant Meta on Tuesday said it intends to ask European Union-based users to give their consent before allowing targeted advertising on its networks including Facebook, bowing to pressure from European regulators.

It said the changes were to address “evolving and emerging regulatory requirements” amid a bruising tussle with the Irish Data Protection Commission that oversees EU data rules in Ireland, out of which Meta runs its European operations.

European regulators in January had dismissed the previous legal basis — “legitimate interest” — Meta had used to justify gathering users’ personal data for targeted advertising.

Currently, users joining Facebook and Instagram by default have that permission turned on, feeding their data to Meta so it can generate billions of dollars from such ads.

“Today, we are announcing our intention to change the legal basis that we use to process certain data for behavioral advertising for people in the EU, EEA [European Economic Area] and Switzerland from ‘Legitimate Interests’ to ‘Consent’,” Meta said in a blog post.

Meta added it will share more information in the months ahead as it continues to “constructively engage” with regulators.

“There is no immediate impact to our services in the region. Once this change is in place, advertisers will still be able to run personalized advertising campaigns to reach potential customers and grow their businesses,” it said.

Meta and other U.S. Big Tech companies have been hit by massive fines over their business practices in the EU in recent years and have been impacted by the need to comply with the bloc’s strict data privacy regulations.

Further effects are expected from the EU’s landmark Digital Markets Act, which bans anti-competitive behavior by the so-called “gatekeepers” of the internet.

LogOn: Deepfakes Are Making It Hard to Know What’s Real in Political Ads

The commission that enforces U.S. election rules will not be regulating AI-generated deepfakes in political advertising ahead of the 2024 presidential election. Deana Mitchell has our story.
