US Announces Charges Related to Efforts by Russia, China, Iran to Steal Technology

U.S. law enforcement officials on Tuesday announced a series of criminal cases exposing the relentless efforts by Russia, China and Iran to steal sensitive U.S. technologies.  

The five cases, which spanned a wide range of protected U.S. technologies, were brought by a new “strike force” created earlier this year to deter foreign adversaries from obtaining advanced U.S. innovation.

“These charges demonstrate the Justice Department’s commitment to preventing sensitive technology from falling into the hands of foreign adversaries, including Russia, China, and Iran,” said Assistant Attorney General Matthew Olsen, who leads the Justice Department’s National Security Division, and co-heads the task force.

Some of the cases announced on Tuesday go back several years but Olsen said the “threat is as significant as ever.”

Two of the cases involve Russia.

In New York, prosecutors charged a Russian national with smuggling U.S. military and dual-use technologies, including advanced electronics and testing equipment, to Russia through the Netherlands and France.  Nikolaos “Nikos” Bogonikolos was arrested last week in France and prosecutors said they’ll seek his extradition.

In a second case, two other Russian nationals – Oleg Sergeyevich Patsulya and Vasilii Sergeyevich Besedin – were arrested in Arizona on May 11 in connection with illegally shipping civilian aircraft parts from the United States to Russian airlines.

Patsulya and Besedin, both residents of Florida, allegedly used their U.S.-based limited liability company to purchase and send the parts, according to court documents.

The three other cases center on China and Iran.

In New York, prosecutors charged a Chinese national with conspiring to provide materials to Iran’s ballistic missile program.

Xiangjiang Qiao, an employee of a Chinese company sanctioned for its role in the proliferation of weapons of mass destruction, allegedly conspired to furnish isostatic graphite, a material used in the production of intercontinental ballistic missiles, to Iran.

Liming Li, a California resident, was arrested on May 6 on charges of stealing “smart manufacturing” technologies from two companies he worked at and providing them to businesses in China.

Li allegedly offered to help Chinese companies build “their own capabilities,” a federal prosecutor said.

He was arrested at Ontario International Airport after arriving on a flight from Taiwan and has since been in federal custody, the Justice Department said.

The fifth case announced on Tuesday dates back to 2018 and accuses a former Apple software engineer of stealing the company’s proprietary research on autonomous systems, including self-driving cars. The defendant took a flight to China on the day the FBI searched his house.

The charges and arrests stem from the work of the Disruptive Technology Strike Force, a joint effort between the departments of Justice and Commerce.

The initiative, announced in February, leverages the expertise of the FBI, Homeland Security Investigations (HSI) and 14 U.S. attorney’s offices.

Olsen said the cases brought by the strike force “demonstrate the breadth and complexity of the threats we face, as well as what is at stake.”

“And they show our ability to accelerate investigations and surge our collective resources to defend against these threats,” Olsen said at a press conference.


ChatGPT’s Chief Testifies Before US Congress as Concerns Grow About AI Risks

The head of the artificial intelligence company that makes ChatGPT told the U.S. Congress on Tuesday that government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman testified at a Senate hearing Tuesday.

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator but was actually a voice clone, trained on Blumenthal’s floor speeches, reciting a speech written by ChatGPT after he asked the chatbot how he would open the hearing.

The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them.

Founded in 2015, OpenAI is also known for other AI products including the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel’s ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”

Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In a copy of her prepared remarks, IBM’s Montgomery asks Congress to take a “precision regulation” approach.

“This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Montgomery said.


STEM Courses in Rural Kenya Open Doors for Girls With Disabilities

Studying science, technology, engineering, and math — or STEM — can be a challenge for girls in rural Africa, especially those with disabilities. In Kenya, an aid group called The Action Foundation is helping to change that by providing remote STEM courses for girls with hearing, visual and physical impairments. Ahmed Hussein reports from Wajir County, Kenya. Camera: Ahmed Hussein


Bolivian EV Startup Hopes Tiny Car Will Make It Big in Lithium-Rich Country

On a recent, cold morning, Dr. Carlos Ortuño hopped into a tiny electric car to go check on a patient in the outskirts of Bolivia’s capital of La Paz, unsure if the vehicle would be able to handle the steep, winding streets of the high-altitude city. 

“I thought that because of the city’s topography it was going to struggle, but it’s a great climber,” said Ortuño about his experience driving a Quantum, the first EV to have ever been made in Bolivia. “The difference from a gasoline-powered vehicle is huge.” 

Ortuño’s home visit aboard a car the size of a golf cart was part of a government-sponsored program that brings doctors to patients living in neighborhoods far from the city center. The “Doctor in your house” program was launched last month by the municipality of La Paz using a fleet of six EVs manufactured by Quantum Motors, the country’s sole producer of electric cars.

“It is a pioneering idea. It helps protect the health of those in need, while protecting the environment and supporting local production,” La Paz Mayor Iván Arias said. 

The program could also help boost Quantum Motors, a company launched four years ago by a group of entrepreneurs who believe EVs will transform the auto industry in Bolivia, a lithium-rich country, where cheap, subsidized imported gasoline is still the norm. 

Built like a box, the Quantum moves at no more than 35 mph (56 kph), can be recharged from a household outlet and can travel 50 miles (80 kilometers) before a recharge. Its creators hope the $7,600 car will help revive dreams of a lithium-powered economy and make electric cars something the masses will embrace. 

“E-mobility will prevail worldwide in the next few years, but it will be different in different countries,” says José Carlos Márquez, general manager of Quantum Motors. “Tesla will be a dominant player in the U.S., with its speedy, autonomous cars. But in Latin America, cars will be more compact, because our streets are more similar to those of Bombay and New Delhi than to those of California.” 

But the company’s quest to boost e-mobility in the South American country has been challenging. In the four years since it released its first EVs, Quantum Motors has sold barely 350 cars in Bolivia and an undisclosed number of units in Peru and Paraguay. The company is also set to open a factory in Mexico later this year, although no further details have been provided on the scope of production there. 

Still, Quantum Motors’ bet on battery-powered cars makes sense when it comes to Bolivia’s resources. With an estimated 21 million tons, Bolivia has the world’s largest reserve of lithium, a key component in electric batteries, but it has yet to extract — and industrialize — its vast resources of the metal. 

In the meantime, the large majority of vehicles in circulation are still powered by fossil fuels, and the government continues to pour millions of dollars into subsidizing imported fuel that it then sells at half price on the domestic market.

“The Quantum (car) might be cheap, but I don’t think it has the capacity of a gasoline-powered car,” says Marco Antonio Rodriguez, a car mechanic in La Paz, although he acknowledges people might change their mind once the government puts an end to gasoline subsidies. 

Despite the challenges ahead, the makers of the Quantum car are hopeful that programs like “Médico en tu casa,” which is scheduled to double in size and extend to other neighborhoods next year, will help boost production and churn out more EVs across the region.

“We are ready to grow,” said Márquez. “Our inventory has been sold out through July.” 


AI Presents Political Peril for 2024 With Threat to Mislead Voters

Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away. 

No more. 

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low. 

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen. 

“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.” 

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence. 

Here are a few: Automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave. Fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race. 

“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it’s not him.” 

Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to the CNN town hall this past week with Trump, was created using an AI voice-cloning tool. 

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Joe Biden announced his reelection campaign, starts with a strange, slightly warped image of Biden and the text “What if the weakest president we’ve ever had was re-elected?”

A series of AI-generated images follows: Taiwan under attack; boarded up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic. 

“An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024,” reads the ad’s description from the RNC. 

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust. 

“What happens if an international entity — a cybercriminal or a nation state — impersonates someone. What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.” 

AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries. 

AI images appearing to show Trump’s mug shot also fooled some social media users even though the former president didn’t take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin. 

Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact. 

Some states have offered their own proposals for addressing concerns about deepfakes. 

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other. 

“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive.” 

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them “a deception” with “no place in legitimate, ethical campaigns.” 

Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too. 

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every single day” and encourages his staff to use it, too, as long as any content drafted with the tool is reviewed by human eyes afterward. 

Nellis’ newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails — all typically tedious tasks on campaigns. 

“The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket,” he said. 


As Net Tightens, Iranians Pushed to Take Up Homegrown Apps

Banned from using popular Western apps, Iranians have been left with little choice but to take up state-backed alternatives, as the authorities tighten internet restrictions for security reasons following months of protests.

Iranians are accustomed to using virtual private networks, or VPNs, to evade restrictions and access prohibited websites or apps, including the U.S.-based Facebook, Twitter and YouTube.

The authorities went as far as imposing total internet blackouts during the protests that erupted after the September death of 22-year-old Mahsa Amini, following her arrest for an alleged breach of the Islamic republic’s dress code for women.

Connections are back up and running again, and even those who are tech-savvy are being corralled into using the apps approved by the authorities such as Neshan for navigation and Snapp! to hail a car ride.

As many as 89 million people have signed up to Iranian messaging apps including Bale, Ita, Rubika and Soroush, the government says, but not everyone is keen on making the switch.

“The topics that I follow and the friends who I communicate with are not on Iranian platforms,” said Mansour Roghani, a resident in the capital Tehran.

“I use Telegram and WhatsApp and, if my VPN still allows me, I’ll check Instagram,” the former municipality employee said, adding that he has not installed domestic apps as replacements.

Integration

At the height of the deadly Amini protests in October, the Iranian government cited security concerns as it moved to restrict internet access and added Instagram and WhatsApp to its long list of blocked applications.

“No one wants to limit the internet and we can have international platforms” if the foreign companies agree to introduce representative offices in Iran, Telecommunications Minister Issa Zarepour said last month.

Meta, the American giant that owns Facebook, Instagram and WhatsApp, has said it has no intention of setting up offices in the Islamic republic, which remains under crippling U.S. sanctions.

The popularity of the state-sanctioned apps may not be what it seems, however, with the government encouraging people to install them by shifting essential online public services to the homegrown platforms which are often funded by the state.

In addition, analysts say, Iranian users have online safety concerns when using the approved local apps.

“We have to understand they have needs,” said Amir Rashidi, director of digital rights and security at the New York-based Miaan Group.

“As an Iranian citizen, what would you do if registering for university is only based on one of these apps? Or what would you do if you need access to government services?” he said.

The locally developed apps lack a “clear privacy policy,” according to software developer Keikhosrow Heydari-Nejat.

“I have installed some of the domestic messaging apps on a separate phone, not the one that I am using every day,” the 23-year-old said, adding he had done so to access online government services.

“If they (government) shut the internet down, I will keep them installed but I will visit my friends in person,” he said.

Interconnection 

In a further effort to push people onto the domestic platforms, the telecommunications ministry connected the four major messaging apps, enabling users to communicate across the platforms.

“Because the government is going for the maximum number of users, they are trying to connect these apps,” the analyst Rashidi said, adding all the domestic platforms “will enjoy financial and technical support.”

Iran has placed restrictions on apps such as Facebook and Twitter since 2009, following protests over disputed presidential elections.

In November 2019, Iran imposed nationwide internet restrictions during protests sparked by surprise fuel price hikes.

A homegrown internet network, the National Information Network (NIN), which is around 60% completed, will allow domestic platforms to operate independently of global networks.

One platform already benefiting from the highly filtered domestic network is Snapp!, an app similar to U.S. ride-hailing service Uber that has 52 million users — more than half the country’s population.

But Rashidi said the NIN will give Tehran greater control to “shut down the internet with less cost” once completed.


Off-Grid Solar Brings Light, Time, Income to Remotest Indonesia Villages

As Tamar Ana Jawa wove a red sarong in the fading sunlight, her neighbor switched on a light bulb dangling from the sloping tin roof. It was just one bulb powered by a small solar panel, but in this remote village that means a lot. In some of the world’s most remote places, off-grid solar systems are bringing villagers like Jawa more hours in the day, more money and more social gatherings.

Before electricity came to the village, a little less than two years ago, the day ended when the sun went down. Villagers in Laindeha, on the island of Sumba in eastern Indonesia, would set aside the mats they were weaving or coffee they were sorting to sell at the market as the light faded.

A few families who could afford them would start noisy generators that rumbled into the night, emitting plumes of smoke. Some people wired lightbulbs to old car batteries, which would quickly die or burn out appliances, as they had no regulator. Children sometimes studied by makeshift oil lamps, but these occasionally burned down homes when knocked over by the wind.

That’s changed since grassroots social enterprise projects have brought small, individual solar panel systems to Laindeha and villages like it across the island.

For Jawa, it means much-needed extra income. When her husband died of a stroke in December 2022, Jawa wasn’t sure how she would pay for her children’s schooling. But when a neighbor got electric lighting shortly after, she realized she could continue weaving clothes for the market late into the evening.

“It used to be dark at night, now it’s bright until morning,” the 30-year-old mother of two said, carefully arranging and pushing red threads at the loom. “So tonight, I work … to pay for the children.”

Around the world, hundreds of millions of people live in communities without regular access to power, and off-grid solar systems like these are bringing limited electricity to such places years before power grids reach them.

Some 775 million people globally lacked access to electricity in 2022, according to the International Energy Agency. Sub-Saharan Africa and South Asia are home to some of the largest populations without access to electricity. Not having electricity at home keeps people in poverty, the U.N. and World Bank wrote in a 2021 report. It’s hard for very poor people to get electricity, according to the report, and it’s hard for people who don’t have it to participate in the modern economy.

Indonesia has brought electricity to millions of people in recent years, going from 85% to nearly 97% coverage between 2005 and 2020, according to World Bank data. But there are still more than half a million people in Indonesia living in places the grid doesn’t reach.

While barriers still remain, experts say off-grid solar programs on the island could be replicated across the vast archipelago nation, bringing renewable energy to remote communities.

Now, villagers frequently gather in the evening to continue the day’s work, gather to watch television shows on cellphones charged by the panels and help children do homework in light bright enough to read.

“I couldn’t really study at night before,” said Antonius Pekambani, a 17-year-old student in Ndapaymi village, east Sumba. “But now I can.”

Solar power is still fairly rare in Indonesia. While the country has targeted more solar as part of its climate goals, there has been limited progress due to regulations that don’t allow households to sell power back to the grid, ruling out a way of defraying the cost that has helped people afford solar in other parts of the world.

That’s where grassroots organizations like Sumba Sustainable Solutions, based in eastern Sumba since 2019, saw potential to help. Working with international donors to help subsidize the cost, it provides imported home solar systems, which can power light bulbs and charge cellphones, for monthly payments equivalent to $3.50 over three years.

The organization also offers solar-powered appliances such as wireless lamps and grinding machines. It said it has distributed over 3,020 solar light systems and 62 mills across the island, reaching more than 3,000 homes.

Imelda Pindi Mbitu, a 46-year-old mother of five living in Walatungga, said she used to spend whole days grinding corn kernels and coffee beans between two rocks to sell at the local market; now, she takes it to a solar-powered mill shared by the village.

“With manual milling, if I start in the morning I can only finish in the afternoon. I can’t do anything else,” she said sitting in her wooden home. “If you use the machine, it’s faster. So now I can do other things.”

Similar schemes in other places, including Bangladesh and sub-Saharan Africa, have helped provide electricity for millions, according to the World Bank.

But some smaller off-grid solar systems like these don’t provide the same amount of power as grid access. While cellphones, light bulbs and mills remain charged, the systems don’t generate enough power for a large sound system or a church.

Off-grid solar projects face hurdles too, said Jetty Arlenda, an engineer with Sumba Sustainable Solutions.

The organization’s scheme is heavily reliant upon donors to subsidize the cost of solar equipment, which many rural residents would be unable to afford at their market cost. Villagers without off-grid solar panels are stuck on waitlists while Sumba Sustainable Solutions looks for more funding. They’re hoping for support from Indonesia’s $20 billion Just Energy Transition Partnership deal, which is being negotiated by numerous developed nations and international financial institutions.

There have also been issues with recipients failing to make payments, especially as the island deals with locust outbreaks that are diminishing villagers’ crops and livelihoods. And when solar systems break, they need imported parts that can be hard to come by.


Elon Musk Names NBCUniversal’s Yaccarino as New Twitter CEO

Billionaire tech entrepreneur Elon Musk on Friday named NBCUniversal executive Linda Yaccarino as the chief executive officer of social media giant Twitter.

From his own Twitter account Friday, Musk wrote, “I am excited to welcome Linda Yaccarino as the new CEO of Twitter! (She) will focus primarily on business operations, while I focus on product design and new technology.” 

He said Yaccarino would transform Twitter, which is now called X Corp., into “an everything app” called X. 

On Thursday, Musk teased Yaccarino’s hiring, saying only “she” will start in six to eight weeks.  

Yaccarino had worked in advertising and media sales at NBCUniversal since 2011, serving as chairperson of global advertising since October 2020. The company announced her departure earlier in the day Friday.

Analysts say Yaccarino’s background could be key to Twitter’s future. Since Musk acquired Twitter last October, he has taken some controversial steps, such as loosening controls on the spread of false information and laying off nearly 80% of its staff, which prompted advertisers to flee.

No comment from Yaccarino on her hiring was immediately available.

Some information for this report was provided by The Associated Press and Reuters. 


Apple to Launch First Online Store in Vietnam

Apple will launch its first online store in Vietnam next week, the company said Friday, hoping to cash in on the country’s young and tech-savvy population.

The iPhone maker is among a host of global tech giants including Intel, Samsung and LG, that have chosen Vietnam for assembly of their products.

But up to now, the Silicon Valley giant has sold its products in Vietnam’s market of 100 million people via authorized resellers.

“We’re honored to be expanding in Vietnam,” said Deirdre O’Brien, Apple’s senior vice president of retail in an online statement in Vietnamese.

The country’s communist government says it wants 85 percent of its adult population to have access to a smartphone by 2025, up from the current 73 percent.

Less than a third of the country’s mobile users have an iPhone, according to market research platform Statista.

Through online stores, “clients in Vietnam can discover products and connect with our experienced experts,” O’Brien said in the statement.

The production of accessories and assembly of mobile phones account for up to 70 percent of electronics manufacturing in Vietnam. Products are mainly for export.

According to official figures, Vietnam’s mobile phone production industry reported an import-export turnover of $114 billion last year, a third of the country’s total import-export revenue.


Will Artificial Intelligence Take Away Jobs? Not Many for Now, Says Expert

The growing abilities of artificial intelligence have left many observers wondering how AI will impact people’s jobs and livelihoods. One expert in the field predicts it won’t have much effect, at least in the short term.  

The topic was a point of discussion at the annual TED conference held recently in Vancouver.   

In a world where students’ term papers can now be written by artificial intelligence, paintings can be drawn by merely uttering words and an AI-generated version of your favorite celebrity can appear on screen, the impact of this new technology is starting to be felt in societies and sparking both wonderment and concern.  

While artificial intelligence has yet to become pervasive in everyday life, the rumblings of what could be a looming economic earthquake are growing stronger.  

Gary Marcus is a professor emeritus of psychology and neural science at New York University who helped ride-sharing company Uber adopt the rapidly developing technology.

An author and host of the podcast “Humans versus Machines,” Marcus says AI’s economic impact is limited for now, although some jobs have already been threatened by the technology, such as commercial animators for electronic gaming. 

Speaking with VOA after the recent conference held by TED, the nonprofit devoted to spreading ideas, Marcus said jobs that require manual labor will be safe, for now.

“We’re not going to see blue collar jobs replaced, I think, as quickly as some people had talked about,” Marcus predicted. “So we still don’t have driverless cars, even though people have talked about that for years. Anybody that does something with their hands is probably safe right now. Because we don’t really know how to make robots that sophisticated when it comes to dealing with the real world.”

Another TED speaker, Sal Khan, is the founder of Khan Academy, which developed Khanmigo, artificial intelligence-powered software designed to help educate children. He is optimistic about AI’s potential economic impact as a driver of wealth creation.

“Will it cause mass dislocations in the job market? I actually don’t know the answer to that,” Khan said, adding that “It will create more wealth, more productivity.” 

The legal profession could be boosted by AI if the technology prompts litigation. Copyright attorneys could especially benefit. 

Tom Graham and his company, Metaphysic.ai, artificially recreate famous actors and athletes so they do not need to physically be in front of a camera or microphone in order to appear in films, TV shows or commercials.    

His company is behind the popular fake videos of actor Tom Cruise that have gone viral on social media. 

He says the legal system will play a role in protecting people from being recreated without their permission.  

Graham, who has a law degree from Harvard University, has applied to the U.S. Copyright Office to register the real-life version of himself.            

“We did that because you’re looking for legal institutions that exist today, that could give you some kind of protection or remedy,” Graham explained. “It’s just, if there’s no way to enforce it, then it’s not really a thing.”

Gary Marcus is urging the formation of an international organization to oversee and monitor artificial intelligence.   

He emphasized the need to “get a lot of smart people together, from the companies, from the government, but also scientists, philosophers, ethicists…” 

“I think it’s really important that we, as a globe, think all these things through,” Marcus concluded. “And don’t just leave it to like 190 governments doing whatever random thing they do without really understanding the science.”

The popular AI chatbot ChatGPT has gained widespread attention in recent months but is not yet a moneymaker. Its maker, OpenAI, lost more than $540 million in 2022.


Elon Musk and Tesla Break Ground on Massive Texas Lithium Refinery

Tesla Inc on Monday broke ground on a Texas lithium refinery that CEO Elon Musk said should produce enough of the battery metal to build about 1 million electric vehicles (EVs) by 2025, making it the largest North American processor of the material. 

The facility will push Tesla outside its core focus of building automobiles and into the complex area of lithium refining and processing, a step Musk said was necessary if the auto giant was to meet its ambitious EV sales targets. 

“As we look ahead a few years, a fundamental choke point in the advancement of electric vehicles is the availability of battery grade lithium,” Musk said at the ground-breaking ceremony on Monday, with dozers and other earth-moving equipment operating in the background. 

Musk said Tesla aimed to finish construction of the factory next year and then reach full production about a year later. 

The move will make Tesla the only major automaker in North America that will refine its own lithium. Currently, China dominates the processing of many critical minerals, including lithium. 

“Texas wants to be able to be self-reliant, not dependent upon any foreign hostile nation for what we need. We need lithium,” Texas Governor Greg Abbott said at the ceremony. 

Musk did not specify the volume of lithium the facility would process each year, although he said the automaker would continue to buy the metal from its vendors, which include Albemarle Corp and Livent Corp. 

“We intend to continue to use suppliers of lithium, so it’s not that Tesla will do all of it,” Musk said. 

Albemarle plans to build a lithium processing facility in South Carolina that will refine 100,000 tons of the metal each year, with construction slated to begin next year and the facility coming online sometime later this decade. 

Musk did not say where Tesla will source the rough form of lithium known as spodumene concentrate that will be processed at the facility, although Tesla has supply deals with Piedmont Lithium Inc and others. 

‘Clean operations’

Tesla said it would eschew the lithium industry’s conventional refining process, which relies on sulfuric acid and other strong chemicals, in favor of materials that were less harsh on the environment, such as soda ash. 

“You could live right in the middle of the refinery and not suffer any ill effect. So they’re very clean operations,” Musk said, although local media reports said some environmental advocates had raised concerns over the facility. 

Monday’s announcement was not the first time that Tesla has attempted to venture into lithium production. Musk in 2020 told shareholders that Tesla had secured rights to 10,000 acres in Nevada where it aimed to produce lithium from clay deposits, which had never been done before on a commercial scale. 

While Musk boasted that the company had developed a proprietary process to sustainably produce lithium from those Nevada clay deposits, Tesla has not yet deployed the process. 

Musk has urged entrepreneurs to enter the lithium refining business, saying it is like “minting money.” 

“We’re begging you. We don’t want to do it. Can someone please?” he said during a conference call last month. 

Tesla said last month that a recent plunge in prices of lithium and other commodities would aid its bruised margins in the second half of the year.

The refinery is the latest expansion by Tesla into Texas after the company moved its headquarters there from California in 2021. Musk’s other companies, including SpaceX and The Boring Company, also have operations in Texas. 


“We are proud that he calls Texas home,” Abbott said, saying Tesla and Musk are “Texas’s economic juggernauts.” 


New Twitter Rules Expose Election Offices to Spoof Accounts

Tracking down accurate information about Philadelphia’s elections on Twitter used to be easy. The account for the city commissioners who run elections, @phillyvotes, was the only one carrying a blue check mark, a sign of authenticity.

But ever since the social media platform overhauled its verification service last month, the check mark has disappeared. That’s made it harder to distinguish @phillyvotes from a list of random accounts not run by the elections office but with very similar names.

The election commission applied weeks ago for a gray check mark — Twitter’s new symbol to help users identify official government accounts — but has yet to hear back from Twitter, commission spokesman Nick Custodio said. It’s unclear whether @phillyvotes is an eligible government account under Twitter’s new rules.

That’s troubling, Custodio said, because Pennsylvania has a primary election May 16 and the commission uses its account to share important information with voters in real time. If the account remains unverified, it will be easier to impersonate – and harder for voters to trust – heading into Election Day.

Impostor accounts on social media are among many concerns election security experts have heading into next year’s presidential election. Experts have warned that foreign adversaries or others may try to influence the election, either through online disinformation campaigns or by hacking into election infrastructure.

Election administrators across the country have struggled to figure out the best way to respond after Twitter owner Elon Musk threw the platform’s verification service into disarray, given that Twitter has been among their most effective tools for communicating with the public.

Some are taking other steps allowed by Twitter, such as buying check marks for their profiles or applying for a special label reserved for government entities, but success has been mixed. Election and security experts say the inconsistency of Twitter’s new verification system is a misinformation disaster waiting to happen.

“The lack of clear, at-a-glance verification on Twitter is a ticking time bomb for disinformation,” said Rachel Tobac, CEO of the cybersecurity company SocialProof Security. “That will confuse users – especially on important days like election days.”

The blue check marks that Twitter once doled out to notable celebrities, public figures, government entities and journalists began disappearing from the platform in April. To replace them, Musk told users that anyone could pay $8 a month for an individual blue check mark or $1,000 a month for a gold check mark as a “verified organization.”

The policy change quickly opened the door for pranksters to pose convincingly as celebrities, politicians and government entities, which could no longer be identified as authentic. While some impostor accounts were clear jokes, others created confusion.

Fake accounts posing as Chicago Mayor Lori Lightfoot, the city’s Department of Transportation and the Illinois Department of Transportation falsely claimed the city was closing one of its main thoroughfares to private traffic. The fake accounts used the same photos, biographical text and home page links as the real ones. Their posts amassed hundreds of thousands of views before being taken down.

Twitter’s new policy invites government agencies and certain affiliated organizations to apply to be labeled as official with a gray check. But at the state and local level, qualifying agencies are limited to “main executive office accounts and main agency accounts overseeing crisis response, public safety, law enforcement, and regulatory issues,” the policy says.

The rules do not mention agencies that run elections. So while the main Philadelphia city government account quickly received its gray check mark last month, the local election commission has not heard back.

Election offices in four of the country’s five most populous counties — Cook County in Illinois, Harris County in Texas, Maricopa County in Arizona and San Diego County — remain unverified, a Twitter search shows. Maricopa, which includes Phoenix, has been targeted repeatedly by election conspiracy theorists as the most populous and consequential county in one of the most closely divided political battleground states.

Some counties contacted by The Associated Press said they have minimal concerns about impersonation or plan to apply for a gray check later, but others said they already have applied and have not heard back from Twitter.

Even some state election offices are waiting for government labels. Among them is the office of Maine Secretary of State Shenna Bellows.

In an April 24 email to Bellows’ communications director reviewed by The Associated Press, a Twitter representative wrote that there was “nothing to do as we continue to manually process applications from around the world.” The representative added in a later email that Twitter stands “ready to swiftly enforce any impersonation, so please don’t hesitate to flag any problematic accounts.”

An email sent to Twitter’s press office and a company safety officer requesting comment was answered only with an autoreply of a poop emoji.

“Our job is to reinforce public confidence,” Bellows told the AP. “Even a minor setback, like no longer being able to ensure that our information on Twitter is verified, contributes to an environment that is less predictable and less safe.”

Some government accounts, including the one representing Pennsylvania’s second-largest county, have purchased blue checks because they were told it was required to continue advertising on the platform.

Allegheny County posts ads for elections and jobs on Twitter, so the blue check mark “was necessary,” said Amie Downs, the county’s communications director.

When anyone can buy verification and when government accounts are not consistently labeled, the check mark loses its meaning, Colorado Secretary of State Jena Griswold said.

Griswold’s office received a gray check mark to maintain trust with voters, but she told the AP she would not buy verification for her personal Twitter account because “it doesn’t carry the same weight” it once did.

Custodio, at the Philadelphia elections commission, said his office would not buy verification either, even if it gets denied a gray check.

“The blue or gold check mark just verifies you as a paid subscriber and does not verify identity,” he said.

Experts and advocates tracking election discourse on social media say Twitter’s changes do not just incentivize bad actors to run disinformation campaigns — they also make it harder for well-meaning users to know what’s safe to share.

“Because Twitter is dropping the ball on verification, the burden will fall on voters to double check that the information they are consuming and sharing is legitimate,” said Jill Greene, voting and elections manager for Common Cause Pennsylvania.

That dampens an aspect of Twitter that until now had been seen as one of its strengths – allowing community members to rally together to elevate authoritative information, said Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public.

“The first rule of a good online community user interface is to ‘help the helpers.’ This is the opposite of that,” Caulfield said. “It takes a community of people who want to help boost good information, and robs them of the tools to make fast, accurate decisions.”


Google Plans to Make Search More ‘Human,’ Says Wall Street Journal

Google is planning to make its search engine more “visual, snackable, personal and human,” with a focus on serving young people globally, The Wall Street Journal reported on Saturday, citing documents.

The move comes as artificial intelligence (AI) applications such as ChatGPT are rapidly gaining in popularity, highlighting a technology that could upend the way businesses and society operate.

The tech giant will nudge its service further away from “10 blue links,” which is a traditional format of presenting search results and plans to incorporate more human voices as part of the shift, the report said.

At its annual I/O developer conference in the coming week, Google is expected to debut new features that allow users to carry out conversations with an AI program, a project code-named “Magi,” The Wall Street Journal added, citing people familiar with the matter.

Generative AI has become a buzzword this year, with applications capturing the public’s fancy and sparking a rush among companies to launch similar products they believe will change the nature of work.

Google, part of Alphabet Inc., did not immediately respond to Reuters’ request for comment.


Buffett Shares Good News on Profits, AI Thoughts at Meeting

Billionaire Warren Buffett said artificial intelligence may change the world in all sorts of ways, but new technology won’t take away opportunities for investors, and he’s confident America will continue to prosper over time.

Buffett and his partner Charlie Munger are spending all day Saturday answering questions at Berkshire Hathaway’s annual meeting inside a packed Omaha arena.

“New things coming along doesn’t take away the opportunities. What gives you the opportunities is other people doing dumb things,” said Buffett, who had a chance to try out ChatGPT when his friend Bill Gates showed it to him a few months back.

Buffett reiterated his long-term optimism about the prospects for America even with the bitter political divisions today.

“The problem now is that partisanship has moved more towards tribalism, and in tribalism you don’t even hear the other side,” he said.

Both Buffett and Munger said the United States will benefit from having an open trading relationship with China, so both countries should be careful not to exacerbate the tensions between them because the stakes are too high for the world.

“Everything that increases the tension between these two countries is stupid, stupid, stupid,” Munger said. And whenever either country does something stupid, he said the other country should respond with incredible kindness.

The chance to listen to the two men answer all sorts of questions about business and life attracts people from all over the world to Omaha, Nebraska. Some of the shareholders feel a particular urgency to attend now because Buffett and Munger are both in their 90s.

“Charlie Munger is 99. I just wanted to see him in person. It’s on my bucket list,” said 40-year-old Sheraton Wu from Vancouver. “I have to attend while I can.”

“It’s a once in a lifetime opportunity,” said Chloe Lin, who traveled from Singapore to attend the meeting for the first time and learn from the two legendary investors.

One of the few concessions Buffett makes to his age is that he no longer tours the exhibit hall before the meeting. In years past, he would be mobbed by shareholders trying to snap a picture with him while a team of security officers worked to manage the crowd. Munger has used a wheelchair for several years, but both men are still sharp mentally.

But in a nod to the concerns about their age, Berkshire showed a series of clips of questions about succession from past meetings dating back to the first one they filmed in 1994. Two years ago, Buffett finally said that Greg Abel will eventually replace him as CEO although he has no plans to retire. Abel already oversees all of Berkshire’s noninsurance businesses.

Buffett assured shareholders that he has total confidence in Abel to lead Berkshire in the future, and he doesn’t have a second choice for the job because Abel is remarkable in his own right. But he said much of what Abel will have to do is just maintain Berkshire’s culture and keep making similar decisions.

“Greg understands capital allocation as well as I do. He will make these decisions on the same framework that I use,” Buffett said.

Abel followed that up by assuring the crowd that he knows how Buffett and Munger have handled things for nearly six decades and “I don’t really see that framework changing.”

Not everyone at the meeting was a fan, though. Outside the arena, pilots from Berkshire’s NetJets protested over the lack of a new contract and pro-life groups carried signs declaring “Buffett’s billions kill millions” to object to his many charitable donations to abortion rights groups.

Berkshire Hathaway said Saturday morning that it made $35.5 billion, or $24,377 per Class A share, in the first quarter. That’s more than 6 times last year’s $5.58 billion, or $3,784 per share.

But Buffett has long cautioned that those bottom line figures can be misleading for Berkshire because the wide swings in the value of its investments — most of which it rarely sells — distort the profits. In this quarter, Berkshire sold only $1.7 billion of stocks while recording a $27.4 billion paper investment gain. Part of this year’s investment gains included a $2.4 billion boost related to Berkshire’s planned acquisition of the majority of the Pilot Travel Centers truck stop company’s shares in January.

Buffett says Berkshire’s operating earnings that exclude investments are a better measure of the company’s performance. By that measure, Berkshire’s operating earnings grew nearly 13% to $8.065 billion, up from $7.16 billion a year ago.

The three analysts surveyed by FactSet expected Berkshire to report operating earnings of $5,370.91 per Class A share.

Buffett came close to giving a formal outlook Saturday when he told shareholders that he expects Berkshire’s operating profits to grow this year even though the economy is slowing down and many of its businesses will sell less in 2023. He said Berkshire will profit from rising interest rates on its holdings, and the insurance market looks good this year.

This year’s first quarter was relatively quiet compared to a year ago when Buffett revealed that he had gone on a $51 billion spending spree at the start of last year, snapping up stocks like Occidental Petroleum, Chevron and HP. Buffett’s buying slowed through the rest of last year with the exception of a number of additional Occidental purchases.

At the end of this year’s first quarter, Berkshire held $130.6 billion cash, up from about $128.59 billion at the end of last year. But Berkshire did spend $4.4 billion during the quarter to repurchase its own shares.

Berkshire’s insurance unit, which includes Geico and a number of large reinsurers, recorded a $911 million operating profit, up from $167 million last year, driven by a rebound in Geico’s results. Geico benefitted from charging higher premiums and a reduction in advertising spending and claims.

But Berkshire’s BNSF railroad and its large utility unit did report lower profits. BNSF earned $1.25 billion, down from $1.37 billion, as the number of shipments it handled dropped 10% after it lost a big customer and imports slowed at the West Coast ports. The utility division added $416 million, down from last year’s $775 million.

Besides those major businesses, Berkshire owns an eclectic assortment of dozens of other businesses, including a number of retail and manufacturing firms such as See’s Candy and Precision Castparts.

Berkshire again faces pressure from activist investors urging the company to do more to catalog its climate change risks in a companywide report. Shareholders were expected to brush that measure and all the other shareholder proposals aside Saturday afternoon because Buffett and the board oppose them, and Buffett controls more than 30% of the vote.

But even as they resist detailing climate risks, a number of Berkshire’s subsidiaries are working to reduce their carbon emissions, including its railroad and utilities. The company’s Clayton Homes unit is showing off a new home design this year that will meet strict energy efficiency standards from the Department of Energy and come pre-equipped for solar power to be added later.


Could AI Pen ‘Casablanca’? Screenwriters Take Aim at ChatGPT

When Greg Brockman, the president and co-founder of ChatGPT maker OpenAI, was recently extolling the capabilities of artificial intelligence, he turned to “Game of Thrones.”

Imagine, he said, if you could use AI to rewrite the ending of that not-so-popular finale. Maybe even put yourself into the show.

“That is what entertainment will look like,” said Brockman.

Not six months since the release of ChatGPT, generative artificial intelligence is already prompting widespread unease throughout Hollywood. Concern over chatbots writing or rewriting scripts is one of the leading reasons TV and film screenwriters took to picket lines earlier this week.

Though the Writers Guild of America is striking for better pay in an industry where streaming has upended many of the old rules, AI looms as a rising anxiety.

“AI is terrifying,” said Danny Strong, the “Dopesick” and “Empire” creator. “Now, I’ve seen some of ChatGPT’s writing and as of now I’m not terrified because Chat is a terrible writer. But who knows? That could change.”

AI chatbots, screenwriters say, could potentially be used to spit out a rough first draft with a few simple prompts (“a heist movie set in Beijing”). Writers would then be hired, at a lower pay rate, to punch it up.

Screenplays could also be slyly generated in the style of known writers. What about a comedy in the voice of Nora Ephron? Or a gangster film that sounds like Mario Puzo? You won’t get anything close to “Casablanca” but the barest bones of a bad Liam Neeson thriller isn’t out of the question.

The WGA’s basic agreement defines a writer as a “person” and only a human’s work can be copyrighted. But even though no one’s about to see a “By AI” writers credit at the beginning of a movie, there are myriad ways that generative AI could be used to craft outlines, fill in scenes and mock up drafts.

“We’re not totally against AI,” says Michael Winship, president of the WGA East and a news and documentary writer. “There are ways it can be useful. But too many people are using it against us and using it to create mediocrity. They’re also in violation of copyright. They’re also plagiarizing.”

The guild is seeking more safeguards on how AI can be applied to screenwriting. It says the studios are stonewalling on the issue. The Alliance of Motion Picture and Television Producers, which bargains on the behalf of production companies, has offered to annually meet with the guild to go over definitions around the fast-evolving technology.

“It’s something that requires a lot more discussion, which we’ve committed to doing,” the AMPTP said in an outline of its position released Thursday.

Experts say the struggle screenwriters are now facing with generative AI is just the beginning. The World Economic Forum this week released a report predicting that nearly a quarter of all jobs will be disrupted by AI over the next five years.

“It’s definitely a bellwether in the workers’ response to the potential impacts of artificial intelligence on their work,” says Sarah Myers West, managing director of the nonprofit AI Now Institute, which has lobbied the government to enact more regulation around AI. “It’s not lost on me that a lot of the most meaningful efforts in tech accountability have been a product of worker-led organizing.”

AI has already filtered into nearly every part of moviemaking. It’s been used to de-age actors, remove swear words from scenes in post-production, supply viewing recommendations on Netflix and posthumously bring back the voices of Anthony Bourdain and Andy Warhol.

The Screen Actors Guild, set to begin its own bargaining with the AMPTP this summer, has said it’s closely following the evolving legal landscape around AI.

“Human creators are the foundation of the creative industries, and we must ensure that they are respected and paid for their work,” the actors union said.

The implications for screenwriting are only just being explored. Actors Alan Alda and Mike Farrell recently reconvened to read through a new scene from “M*A*S*H” written by ChatGPT. The results weren’t terrible, though they weren’t so funny, either.

“Why have a robot write a script and try to interpret human feelings when we already have studio executives who can do that?” deadpanned Alda.

Writers have long been among the most notoriously exploited talents in Hollywood. The films they write usually don’t get made. If they do, they’re often rewritten many times over. Raymond Chandler once wrote that “the very nicest thing Hollywood can possibly think to say to a writer is that he is too good to be only a writer.”

Screenwriters are accustomed to being replaced. Now, they see a new, readily available and inexpensive competitor in AI — albeit one with a far more tenuous grasp of the human condition.

“Obviously, AI can’t do what writers and humans can do. But I don’t know that they believe that, necessarily,” says screenwriter Jonterri Gadson (“A Black Lady Sketch Show”). “There needs to be a human writer in charge and we’re not trying to be gig workers, just revising what AI does. We need to tell the stories.”

Dramatizing their plight as man vs. machine surely doesn’t hurt the WGA’s cause in public opinion. The writers are wrestling with the threat of AI just as concern widens over how hurriedly generative AI products have been thrust into society.

Geoffrey Hinton, an AI pioneer, recently left Google in order to speak freely about its potential dangers. “It’s hard to see how you can prevent the bad actors from using it for bad things,” Hinton told The New York Times.

“What’s especially scary about it is nobody, including a lot of the people who are involved with creating it, seem to be able to explain exactly what it’s capable of and how quickly it will be capable of more,” says actor-screenwriter Clark Gregg.

The writers find themselves in the awkward position of negotiating on a newborn technology with the potential for radical effect. Meanwhile, AI-crafted songs by “Fake Drake” or “Fake Eminem” continue to circulate online.

“They’re afraid that if the use of AI to do all this becomes normalized, then it becomes very hard to stop the train,” says James Grimmelmann, a professor of digital and information law at Cornell University. “The guild is in the position of trying to imagine lots of different possible futures.”

In the meantime, chanting demonstrators are hoisting signs with messages aimed at a digital foe. Seen on the picket lines: “ChatGPT doesn’t have childhood trauma”; “I heard AI refuses to take notes”; and “Wrote ChatGPT this.”


Hate Passwords? You’re in Luck — Google Is Sidelining Them

Good news for all the password-haters out there: Google has taken a big step toward making them an afterthought by adding “passkeys” as a more straightforward and secure way to log into its services. 

Here’s what you need to know: 

What are passkeys?  

Passkeys offer a safer alternative to passwords and texted confirmation codes. Users won’t ever see them directly; instead, an online service like Gmail will use them to communicate directly with a trusted device such as your phone or computer to log you in. 

All you’ll have to do is verify your identity on the device using a PIN unlock code, biometrics such as your fingerprint or a face scan or a more sophisticated physical security dongle. 

Google designed its passkeys to work with a variety of devices, so you can use them on iPhones, Macs and Windows computers, as well as Google’s own Android phones. 

Why are passkeys necessary?  

Thanks to clever hackers and human fallibility, passwords are just too easy to steal or defeat. And making them more complex just opens the door to users defeating themselves. 

For starters, many people choose passwords they can remember — and easy-to-recall passwords are also easy to hack. For years, analysis of hacked password caches found that the most common password in use was “password123.” A more recent study by the password manager NordPass found that it’s now just “password.” This isn’t fooling anyone. 

Passwords are also frequently compromised in security breaches. Stronger passwords are more secure, but only if you choose ones that are unique, complex and non-obvious. And once you’ve settled on “erVex411$%” as your password, good luck remembering it. 

In short, passwords put security and ease of use directly at odds. Software-based password managers, which can create and store complex passwords for you, are valuable tools that can improve security. But even password managers have a master password you need to protect, and that plunges you back into the swamp. 

In addition to sidestepping all those problems, passkeys have one additional advantage over passwords. They’re specific to particular websites, so scammer sites can’t steal a passkey from a dating site and use it to raid your bank account. 
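The site-specific, challenge-response design described above can be sketched in a toy way. The following is a minimal illustration, not how any real passkey system works: actual passkeys use the WebAuthn/FIDO2 protocol with device-managed ECDSA or RSA keys, and the tiny textbook-RSA numbers, the `register`/`sign`/`verify` helpers and the site name here are all hypothetical and deliberately insecure.

```python
import hashlib
import secrets

# Textbook RSA with tiny primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                # public modulus
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent; never leaves the "device"

def register(site: str, keystore: dict) -> None:
    """The device creates a per-site credential; the site stores only the
    public half. Binding the key to one site is what stops a scam site
    from stealing a passkey and replaying it elsewhere."""
    keystore[site] = {"public": (n, e), "private": d}

def sign(challenge: bytes, site: str, keystore: dict) -> int:
    # After the user unlocks the device (PIN, fingerprint, face scan),
    # it signs the site's random challenge with the per-site private key.
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, keystore[site]["private"], n)

def verify(challenge: bytes, signature: int, public_key: tuple) -> bool:
    # The site checks the signature with its stored public key; it never
    # learns the private key, so a server breach leaks nothing an
    # attacker could log in with.
    n_pub, e_pub = public_key
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n_pub
    return pow(signature, e_pub, n_pub) == digest

keystore = {}
register("mail.example.com", keystore)
challenge = secrets.token_bytes(16)   # fresh random challenge per login
sig = sign(challenge, "mail.example.com", keystore)
print(verify(challenge, sig, keystore["mail.example.com"]["public"]))  # True
```

The key point of the sketch is the asymmetry: only the public half ever reaches the server, and each login signs a fresh challenge, so there is no reusable secret to phish or leak.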

How do I start using passkeys?  

The first step is to enable them for your Google account. On any trusted phone or computer, open the browser and sign into your Google account. Then visit the page g.co/passkeys and click the option to “start using passkeys.” Voila! The passkey feature is now activated for that account. 

If you’re on an Apple device, you’ll first be prompted to set up the Keychain app if you’re not already using it; it securely stores passwords and now passkeys, as well. 

The next step is to create the actual passkeys that will connect your trusted device. If you’re using an Android phone that’s already logged into your Google account, you’re most of the way there; Android phones are automatically ready to use passkeys, though you still have to enable the function first. 

On the same Google account page noted above, look for the “Create a passkey” button. Pressing it will open a window and let you create a passkey either on your current device or on another device. There’s no wrong choice; the system will simply notify you if that passkey already exists. 

If you’re on a PC that can’t create a passkey, it will display a QR code that you can scan with the ordinary camera on an iPhone or Android device. You may have to move the phone closer until the message “Set up passkey” appears on the image. Tap that and you’re on your way. 

And then what?  

From that point on, signing into Google will only require you to enter your email address. If you’ve gotten passkeys set up properly, you’ll simply get a message on your phone or other device asking you for your fingerprint, your face or a PIN.

Of course, your password is still there. But if passkeys take off, odds are good you won’t be needing it very much. You may even choose to delete it from your account someday. 


‘Godfather of AI’ Quits Google to Warn of the Technology’s Dangers

A computer scientist often dubbed “the godfather of artificial intelligence” has quit his job at Google to speak out about the dangers of the technology, U.S. media reported Monday.

Geoffrey Hinton, who created a foundation technology for AI systems, told The New York Times that advancements made in the field posed “profound risks to society and humanity”.

“Look at how it was five years ago and how it is now,” he was quoted as saying in the piece, which was published on Monday. “Take the difference and propagate it forwards. That’s scary.”

Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation.

“It is hard to see how you can prevent the bad actors from using it for bad things,” he told The Times.

Jobs could be at risk

In 2022, Google and OpenAI — the startup behind the popular AI chatbot ChatGPT — started building systems using much larger amounts of data than before.

Hinton told The Times he believed these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.

“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” he told the paper.

While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk.

AI “takes away the drudge work” but “might take away more than that,” he told The Times.

Concern about misinformation

The scientist also warned about the potential spread of misinformation created by AI, telling The Times that the average person will “not be able to know what is true anymore.”

Hinton notified Google of his resignation last month, The Times reported.

Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to U.S. media.

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI,” the statement added.

“We’re continually learning to understand emerging risks while also innovating boldly.”

In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe.

An open letter, signed by more than 1,000 people, including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.

Hinton did not sign that letter at the time, but told The New York Times that scientists should not “scale this up more until they have understood whether they can control it.”


EU Tech Tsar Vestager Sees Political Agreement on AI Law This Year 

The European Union is likely to reach a political agreement this year that will pave the way for the world’s first major artificial intelligence (AI) law, the bloc’s tech regulation chief, Margrethe Vestager, said on Sunday.

This follows a preliminary deal reached on Thursday by members of the European Parliament to push through the draft of the EU’s Artificial Intelligence Act to a vote on May 11. Parliament will then thrash out the bill’s final details with EU member states and the European Commission before it becomes law.

At a press conference after a Group of Seven digital ministers’ meeting in Takasaki, Japan, Vestager said the EU AI Act was “pro-innovation” since it seeks to mitigate the risks of societal damage from emerging technologies.

Regulators around the world have been trying to find a balance where governments could develop “guardrails” on emerging artificial intelligence technology without stifling innovation.

“The reason why we have these guardrails for high-risk use cases is that cleaning up … after a misuse by AI would be so much more expensive and damaging than the use case of AI in itself,” Vestager said.

While the EU AI Act is expected to be passed this year, lawyers have said it will take a few years for it to be enforced. But Vestager said businesses could start considering the implications of the new legislation.

“There was no reason to hesitate and to wait for the legislation to be passed to accelerate the necessary discussions to provide the changes in all the systems where AI will have an enormous influence,” she told Reuters in an interview.

While research on AI has been going on for years, the sudden popularity of generative AI applications such as OpenAI’s ChatGPT and Midjourney has led to a scramble by lawmakers to find ways to regulate any uncontrolled growth.

An organization backed by Elon Musk and European lawmakers involved in drafting the EU AI Act are among those to have called for world leaders to collaborate to find ways to stop advanced AI from creating disruptions.

Digital ministers of the G-7 advanced nations on Sunday also agreed to adopt “risk-based” regulation on AI, among the first steps that could lead to global agreements on how to regulate AI.

“It is important that our democracy paved the way and put in place the rules to protect us from its abusive manipulation – AI should be useful but it shouldn’t be manipulating us,” said German Transport Minister Volker Wissing.

This year’s G-7 meeting was also attended by representatives from Indonesia, India and Ukraine.


UK Blocks Microsoft-Activision Gaming Deal, Biggest in Tech

British antitrust regulators on Wednesday blocked Microsoft’s $69 billion purchase of video game maker Activision Blizzard, thwarting the biggest tech deal in history over worries that it would stifle competition for popular titles like Call of Duty in the fast-growing cloud gaming market.

The Competition and Markets Authority said in its final report that “the only effective remedy” to the substantial loss of competition “is to prohibit the Merger.” The companies have vowed to appeal.

The all-cash deal faced stiff opposition from rival Sony, which makes the PlayStation gaming system, and also was being scrutinized by regulators in the U.S. and Europe over fears that it would give Microsoft and its Xbox console control of hit franchises like Call of Duty and World of Warcraft.

The U.K. watchdog’s concerns centered on how the deal would affect cloud gaming, which streams to tablets, phones and other devices and frees players from buying expensive consoles and gaming computers. Gamers can keep playing major Activision titles, including mobile games like Candy Crush, on the platforms they typically use.

Cloud gaming has the potential to change the industry by giving people more choice over how and where they play, said Martin Colman, chair of the Competition and Markets Authority’s independent expert panel investigating the deal.

“This means that it is vital that we protect competition in this emerging and exciting market,” he said.

The decision underscores Europe’s reputation as the global leader in efforts to rein in the power of Big Tech companies. A day earlier, the U.K. government unveiled draft legislation that would give regulators more power to protect consumers from online scams and fake reviews and boost digital competition.

The U.K. decision further dashes Microsoft’s hopes that a favorable outcome could help it resolve a lawsuit brought by the U.S. Federal Trade Commission. A trial before the FTC’s in-house judge is set to begin Aug. 2. The European Union’s decision, meanwhile, is due May 22.

Activision lashed out, portraying the watchdog’s decision as a bad signal to international investors in the United Kingdom at a time when the British economy faces severe challenges.

The game maker said it would “work aggressively” with Microsoft to appeal, asserting that the move “contradicts the ambitions of the U.K.” to be an attractive place for tech companies.

“We will reassess our growth plans for the U.K. Global innovators large and small will take note that — despite all its rhetoric — the U.K. is clearly closed for business,” Activision said.

Redmond, Washington-based Microsoft also signaled it wasn’t ready to give up.

“We remain fully committed to this acquisition and will appeal,” President Brad Smith said in a statement. The decision “rejects a pragmatic path to address competition concerns” and discourages tech innovation and investment in Britain, he said.

“We’re especially disappointed that after lengthy deliberations, this decision appears to reflect a flawed understanding of this market and the way the relevant cloud technology actually works,” Smith said.

It’s not the first time British regulators have flexed their antitrust muscles on a Big Tech deal. They previously blocked Facebook parent Meta’s purchase of Giphy over fears it would limit innovation and competition. The social media giant appealed the decision to a tribunal but lost and was forced to sell off the GIF sharing platform.

When it comes to gaming, Microsoft already has a strong position in the cloud computing market, and regulators concluded that if the deal went through, it would reinforce the company’s advantage by giving it control of key game titles.

In an attempt to ease concerns, Microsoft struck deals with Nintendo and some cloud gaming providers to license Activision titles like Call of Duty for 10 years — offering the same to Sony.

The watchdog said it reviewed Microsoft’s remedies “in considerable depth” but found they would require its oversight, whereas preventing the merger would allow cloud gaming to develop without intervention.


Twitter Changes Stoke Russian, Chinese Propaganda Surge

Twitter accounts operated by authoritarian governments in Russia, China and Iran are benefiting from recent changes at the social media company, researchers said Monday, making it easier for them to attract new followers and broadcast propaganda and disinformation to a larger audience. 

The platform is no longer labeling state-controlled media and propaganda agencies, and will no longer prohibit their content from being automatically promoted or recommended to users. Together, the two changes, both made in recent weeks, have supercharged the Kremlin’s ability to use the U.S.-based platform to spread lies and misleading claims about its invasion of Ukraine, U.S. politics and other topics. 

Russian state media accounts are now earning 33% more views than they were just weeks ago, before the change was made, according to findings released Monday by Reset, a London-based non-profit that tracks authoritarian governments’ use of social media to spread propaganda. Reset’s findings were first reported by The Associated Press. 

The increase works out to more than 125,000 additional views per post. Those posts included ones suggesting the CIA had something to do with the September 11, 2001, attacks on the U.S., that Ukraine’s leaders are embezzling foreign aid to their country, and that Russia’s invasion of Ukraine was justified because the U.S. was running clandestine biowarfare labs in the country. 

State media agencies operated by Iran and China have seen similar increases in engagement since Twitter quietly made the changes. 

The about-face from the platform is the latest development since billionaire Elon Musk purchased Twitter last year. Since then, he has ushered in a confusing new verification system, laid off much of the company’s staff (including those dedicated to fighting misinformation), allowed back neo-Nazis and others formerly suspended from the site, and ended the site’s policy prohibiting dangerous COVID-19 misinformation. Hate speech and disinformation have thrived. 

Before the most recent change, Twitter affixed labels reading “Russia state-affiliated media” to let users know the origin of the content. It also throttled back the Kremlin’s online engagement by making the accounts ineligible for automatic promotion or recommendation—something it regularly does for ordinary accounts as a way to help them reach bigger audiences. 

The labels quietly disappeared after National Public Radio and other outlets protested Musk’s plans to label their outlets as state-affiliated media, too. NPR then announced it would no longer use Twitter, saying the label was misleading, given NPR’s editorial independence, and would damage its credibility. 

Reset’s conclusions were confirmed by the Atlantic Council’s Digital Forensic Research Lab (DFRL), where researchers determined the changes were likely made by Twitter late last month. Many of the dozens of previously labeled accounts were steadily losing followers since Twitter began using the labels. But after the change, many accounts saw big jumps in followers. 

RT Arabic, one of Russia’s most popular propaganda accounts on Twitter, had fallen to fewer than 5,230,000 followers by January 1, but rebounded after the change was implemented, the DFRL found. It now has more than 5,240,000 followers. 

Before the change, users interested in seeking out Kremlin propaganda had to search specifically for the account or its content. Now, it can be recommended or promoted like any other content. 

“Twitter users no longer must actively seek out state-sponsored content in order to see it on the platform; it can just be served to them,” the DFRL concluded. 

Twitter did not respond to questions about the change or the reasons behind it. Musk has made past comments suggesting he sees little difference between state-funded propaganda agencies operated by authoritarian strongmen and independent news outlets in the West.

“All news sources are partially propaganda,” he tweeted last year, “some more than others.”


Writer, Adviser, Poet, Bot: How ChatGPT Could Transform Politics

The AI bot ChatGPT has passed exams, written poetry, and been deployed in newsrooms, and now politicians are seeking it out — but experts are warning against rapid uptake of a tool also famous for fabricating “facts.”

The chatbot, released last November by U.S. firm OpenAI, has quickly moved center stage in politics — particularly as a way of scoring points.

Japanese Prime Minister Fumio Kishida recently took a direct hit from the bot when he answered some innocuous questions about health care reform from an opposition MP.

Unbeknownst to the PM, his adversary had generated the questions with ChatGPT. The lawmaker also used the bot to generate answers that he claimed were “more sincere” than Kishida’s.

The PM hit back that his own answers had been “more specific.”

French trade union boss Sophie Binet was on-trend when she drily assessed a recent speech by President Emmanuel Macron as one that “could have been done by ChatGPT.”

But the bot has also been used to write speeches and even help draft laws. 

“It’s useful to think of ChatGPT and generative AI in general as a cliche generator,” David Karpf of George Washington University in the U.S. said during a recent online panel. 

“Most of what we do in politics is also cliche generation.”

‘Limited added value’

Nowhere has the enthusiasm for grandstanding with ChatGPT been keener than in the United States.

Last month, Congresswoman Nancy Mace gave a five-minute speech at a Senate committee enumerating potential uses and harms of AI — before delivering the punchline that “every single word” had been generated by ChatGPT.

Local U.S. politician Barry Finegold had already gone further though, pronouncing in January that his team had used ChatGPT to draft a bill for the Massachusetts Senate.

The bot reportedly introduced original ideas to the bill, which is intended to rein in the power of chatbots and AI.

Anne Meuwese from Leiden University in the Netherlands wrote in a column for Dutch law journal RegelMaat last week that she had carried out a similar experiment with ChatGPT and also found that the bot introduced original ideas.

But while ChatGPT was to some extent capable of generating legal texts, she wrote that lawmakers should not fall over each other to use the tool.

“Not only is much still unclear about important issues such as environmental impact, bias and the ethics at OpenAI … the added value also seems limited for now,” she wrote.

Agitprop bots

The added value might be more obvious lower down the political food chain, though, where staffers on the campaign trail face a treadmill of repetitive tasks.

Karpf suggested AI could be useful for generating emails asking for donations — necessary messages that were not intended to be masterpieces.

This raises an issue of whether the bots can be trained to represent a political point of view.

ChatGPT has already provoked a storm of controversy over its apparent liberal bias — the bot initially refused to write a poem praising Donald Trump but happily churned out couplets for his successor, U.S. President Joe Biden.

Billionaire magnate Elon Musk has spied an opportunity. Despite warning that AI systems could destroy civilization, he recently promised to develop TruthGPT, an AI text tool stripped of the perceived liberal bias.

Perhaps he needn’t have bothered. New Zealand researcher David Rozado already ran an experiment retooling ChatGPT as RightWingGPT — a bot on board with family values, liberal economics and other right-wing rallying cries.

“Critically, the computational cost of trialling, training and testing the system was less than $300,” he wrote on his Substack blog in February.

Not to be outdone, the left has its own “Marxist AI.”

The bot was created by the founder of Belgian satirical website Nordpresse, who goes by the pseudonym Vincent Flibustier.

He told AFP his bot just sends queries to ChatGPT with the command to answer as if it were an “angry trade unionist.”

The malleability of chatbots is central to their appeal but it goes hand-in-hand with the tendency to generate untruths, making AI text generators potentially hazardous allies for the political class.

“You don’t want to become famous as the political consultant or the political campaign that blew it because you decided that you could have a generative AI do [something] for you,” said Karpf. 
