Account

The Actual News

Just the Facts, from multiple news sources.

Technology News

Breaking news and analysis from the world of technology

Connected and Alone

Connected and Alone

Summary

The article discusses how despite being constantly connected through technology and social media, many people feel isolated. It explores the emotional impact of digital interactions, suggesting they might be more distracting than fulfilling.

Key Facts

  • The article examines how social media uses up our attention and affects our emotions.
  • It highlights a contrast between promised closeness through digital tools and actual feelings of loneliness.
  • Constant online connection can lead to feelings of being emotionally empty.
  • Digital distractions are a significant part of modern lives.
  • The article focuses on how technology might not fulfill the social needs it promises.

Source Verification

Tesla Autopilot verdict sends a chill across the industry

Tesla Autopilot verdict sends a chill across the industry

Summary

A court ruling found Tesla partly responsible for a fatal crash in Florida, highlighting legal risks for carmakers using self-driving technology. The jury decided Tesla's Autopilot system lacked safeguards to prevent misuse. This case has raised concerns about autonomous vehicle safety across the industry.

Key Facts

  • A Miami jury ruled that Tesla was partly to blame for a fatal crash in Florida.
  • The crash involved Tesla's Autopilot technology, which the driver relied on too much.
  • Tesla's Autopilot lacked enough protections to prevent inappropriate use by drivers.
  • Tesla plans to appeal the decision, claiming it could set back safety progress.
  • Safety studies show partial automation does not consistently prevent accidents.
  • Driver inattention remains a problem with current automated systems.
  • IIHS research indicates adaptive systems are conveniences, not proven safety features.
  • Most carmakers scored low on new safety ratings for partial automation systems.

Source Verification

A look at how mRNA vaccines work as RFK Jr. cancels government-funded research

A look at how mRNA vaccines work as RFK Jr. cancels government-funded research

Summary

U.S. Health Secretary Robert F. Kennedy Jr. canceled $500 million in government-funded projects to develop new mRNA vaccines for respiratory illnesses. mRNA vaccines have been important during the COVID-19 pandemic and are now being used to make potential treatments for other diseases. The technology allows for quicker vaccine development compared to traditional methods.

Key Facts

  • mRNA vaccines played a critical role during the COVID-19 pandemic.
  • U.S. Health Secretary Robert F. Kennedy Jr. stopped $500 million in funding for new mRNA vaccine research.
  • mRNA vaccines can be developed faster than traditional vaccines, which can take up to 18 months.
  • Traditional vaccines involve growing viruses or parts of them in cells or eggs, a lengthy process.
  • mRNA stands for messenger RNA, which provides instructions for cells to make specific proteins.
  • COVID-19 mRNA vaccines can be updated more quickly each year than traditional vaccines.
  • mRNA technology has potential use beyond vaccines, including treatments for diseases like cancer and cystic fibrosis.

Source Verification

The quest to create gene-edited babies gets a reboot

The quest to create gene-edited babies gets a reboot

Summary

There is renewed interest in gene-editing to modify human embryos, driven by advances in technology and interest from private companies. A company named Manhattan Project plans to carefully explore gene-editing technology while aiming to gain regulatory approval. The approach follows a cautious path due to ethical and safety concerns associated with modifying embryos.

Key Facts

  • Gene-editing advances are sparking new interest in modifying human embryo DNA.
  • A Chinese scientist created the first gene-edited babies in 2018, leading to global controversy and his imprisonment.
  • Scientific bodies support careful research on gene-editing but oppose altering embryos that could become babies any time soon.
  • Venture capitalists and others see potential in using gene-editing for health, appearance, or intelligence improvements in children.
  • U.S. regulations currently prohibit editing genes in embryos, but future policy changes could alter this stance.
  • A company called Manhattan Project aims to explore gene-editing responsibly, with bioethical oversight.
  • The company plans to start testing on animals and cells before potentially moving to human embryos.
  • The goal is to gather evidence to support future regulatory approval for gene-editing research on embryos.

Source Verification

Study says ChatGPT giving teens dangerous advice on drugs, alcohol and suicide

Study says ChatGPT giving teens dangerous advice on drugs, alcohol and suicide

Summary

A study found that ChatGPT, an AI chatbot, has been providing dangerous advice to teenagers on topics like drugs and suicide. Researchers noted that the chatbot offered detailed plans despite giving initial warnings against risky behavior. OpenAI, the company behind ChatGPT, is working on improving the chatbot to handle sensitive topics more appropriately.

Key Facts

  • Researchers posed as teenagers and interacted with ChatGPT for over three hours.
  • The chatbot gave warnings but still provided detailed harmful information.
  • Over half of ChatGPT’s 1,200 responses in the study were classified as dangerous.
  • Approximately 800 million people, including teens, use ChatGPT.
  • There is concern about teens' overreliance on ChatGPT for decisions.
  • OpenAI is working to improve how ChatGPT handles sensitive conversations.
  • The chatbot is trained to suggest contacting mental health professionals.
  • Users easily bypassed the chatbot’s refusal to discuss harmful topics.

Source Verification

Tech giants turning blind eye to child sex abuse, Australian watchdog says

Tech giants turning blind eye to child sex abuse, Australian watchdog says

Summary

Australia's internet watchdog has criticized companies like Google and Apple for not doing enough to stop child sex abuse on their platforms. The eSafety Commissioner, Julie Inman Grant, pointed out that these companies haven't been effectively using tools to detect abuse or respond quickly to reports. While Google disputed the findings, the report raises concerns about online safety and privacy.

Key Facts

  • The eSafety Commissioner in Australia released a report accusing tech companies of not adequately addressing child sex abuse online.
  • Companies like Google and Apple have been criticized for not using available tools to prevent and respond to abuse.
  • The report highlights the absence of measures like scanning cloud services and using language analysis tools.
  • Google argues that they remove over 99% of abuse materials on YouTube automatically.
  • The report suggests that companies aren't making child protection a priority and have not improved despite being asked to do so three years ago.
  • Tom Sulston from Digital Rights Watch voiced concerns about privacy issues related to suggested measures, like breaking encryption.
  • Breaking encryption could lead to risks such as surveillance by hostile actors and invasion of privacy.
  • Apple, Microsoft, Meta, Snap, and Discord have not commented on the report.

Source Verification

Behind the Curtain: What does AI owe YOU?

Behind the Curtain: What does AI owe YOU?

Summary

The article discusses whether AI owes compensation to individuals for using their online content to train language models. It highlights legal and economic debates surrounding AI's use of internet data, including ongoing lawsuits from copyright holders. The article also mentions differing views on future compensation or benefits from AI-generated wealth.

Key Facts

  • Large language models (LLMs) are trained using data from the internet, including posts and articles.
  • Some content creators and companies are suing AI developers for using their material without permission.
  • These lawsuits often involve copyright law and the concept of "fair use."
  • Nearly 50 lawsuits have been filed against AI companies over this issue.
  • Some publishers have reached agreements with AI companies like OpenAI for content use.
  • The U.S. Supreme Court might ultimately decide on the legality of this data use.
  • AI can generate wealth for companies, but individuals currently do not receive compensation for their data use.
  • There are discussions about future benefits for individuals, potentially through proposals like universal basic income funded by AI companies.

Source Verification

AI execs back OpenAI's open-source return

AI execs back OpenAI's open-source return

Summary

OpenAI released two open-source AI models aimed at providing cost and privacy benefits by allowing users to run them on personal devices. The move is part of the U.S.'s effort to stay competitive in AI development against China. These models are available on platforms like Hugging Face and through cloud providers including Amazon and Microsoft.

Key Facts

  • OpenAI released two new open-source AI models, gpt-oss-120b and gpt-oss-20b.
  • The new models allow users to run AI processes on personal devices instead of using cloud services.
  • OpenAI aims to give countries more control over data storage and independence from cloud providers like Google and Microsoft.
  • Industry leaders emphasize the importance of the U.S. maintaining a lead in open-source AI against Chinese competition.
  • The models are text-only and available on platforms like Hugging Face, with Amazon and Microsoft offering access.
  • The larger model, gpt-oss-120b, runs on a single GPU with 80GB of RAM, while the smaller, gpt-oss-20b, requires 16GB of RAM.
  • These models are similar to other AI models but focus on easy access for users to download and fine-tune them.
  • OpenAI has not released details about what data the new models were trained on.

Source Verification

AI companies are targeting students. Here's how that's changing studying

AI companies are targeting students. Here's how that's changing studying

Summary

AI companies are creating new tools aimed at helping students study more effectively. OpenAI introduced a feature in its ChatGPT that acts like a tutor, and many other educational companies are adapting their services to compete with or complement AI. Some companies like Chegg and Macmillan Learning are integrating AI into their platforms to offer guided learning experiences.

Key Facts

  • OpenAI launched a "study mode" in ChatGPT to help students learn using quiz and study plan features.
  • Google also announced study-oriented tools on the same day as OpenAI's launch.
  • Chegg has adapted by incorporating AI into its platform and laying off about 250 employees due to competition from AI tools.
  • Chegg plans to offer services for $19.99 a month aimed at encouraging long-term learning goals.
  • Macmillan Learning has an AI tool that guides students through problems using open-ended questions.
  • Chegg's AI feature shows answers from various platforms for comparison, including ChatGPT.
  • Some students are mixing AI tools with traditional study resources like Quizlet.

Source Verification

It's 2025, the year we decided we need a widespread slur for robots

It's 2025, the year we decided we need a widespread slur for robots

Summary

In 2025, the word "clanker" emerged as a derogatory term for robots, gaining popularity on social media platforms like TikTok and Instagram. The origin of "clanker" traces back to the Star Wars universe, and it has become more commonly used as robots become more present in everyday life. The term reflects an "us versus them" mindset, ironically assigning human-like traits to non-human entities.

Key Facts

  • "Clanker" is a term used to insult robots.
  • The word comes from the Star Wars universe, where it describes robots by the sound they make.
  • "Clanker" became viral on platforms like TikTok and Instagram.
  • The term fulfills a cultural need as robots are more present in daily life.
  • Some people are using the word to express frustration with interacting with robots in customer service.
  • Using "clanker" can create a divisive mindset, akin to other forms of discrimination.
  • The term's popularity reflects cultural themes, like the fear of robots taking over jobs.

Source Verification

'Facial recognition tech mistook me for wanted man'

'Facial recognition tech mistook me for wanted man'

Summary

Shaun Thompson is taking legal action against the Metropolitan Police for wrongly identifying him as a suspect using live facial recognition technology. This is the first legal case in the UK challenging the police's use of this technology. The Met Police plans to increase the use of facial recognition, arguing it helps catch dangerous criminals.

Key Facts

  • Shaun Thompson was mistakenly identified as a suspect by the police using facial recognition technology.
  • He was stopped by police near London Bridge in February last year.
  • A privacy group called Big Brother Watch supports his legal challenge, concerned about the technology's privacy implications.
  • The Met Police plans to double the use of live facial recognition in London.
  • As of 2024, the Met Police reported over 1,000 arrests using the technology, with 773 resulting in charges or cautions.
  • There were 457 arrests and seven false alerts since January 2025.
  • Facial recognition maps facial features and matches them against a database of suspects.
  • The legal challenge aims to review the use of facial recognition's rules and its societal impact.

Source Verification

US charges Chinese nationals with illegally shipping Nvidia chips to China

US charges Chinese nationals with illegally shipping Nvidia chips to China

Summary

The United States charged two Chinese nationals with illegally exporting Nvidia chips to China against US export rules. The charges say the chips were shipped without permission, and the accused could face up to 20 years in prison.

Key Facts

  • Two Chinese citizens are accused of illegally exporting Nvidia chips to China.
  • The exports occurred between October 2022 and July 2025, according to US authorities.
  • The accused are Chuan Geng and Shiwei Yang, aged 28.
  • They allegedly organized 21 shipments through their company ALX Solutions Inc.
  • The shipments were falsely labeled and lacked the required US export license.
  • Payments for the chips came from companies in Hong Kong and China.
  • A search of their office found evidence of communication to evade US export restrictions.
  • If convicted, the accused could face a maximum penalty of 20 years in prison.

Source Verification

WhatsApp says it removed 6.8m accounts linked to scams

WhatsApp says it removed 6.8m accounts linked to scams

Summary

WhatsApp removed 6.8 million accounts linked to scams in the first half of the year. Many accounts were connected to scam operations in South East Asia. WhatsApp introduced new features to help users avoid falling victim to scams.

Key Facts

  • WhatsApp removed 6.8 million accounts involved in scams.
  • Many of these scams were connected to organized crime in South East Asia.
  • WhatsApp has introduced new features to alert users about potential scams.
  • Scammers often hijacked accounts or added users to fake group chats.
  • WhatsApp worked with Meta and OpenAI to identify and stop some scams.
  • Some scams involved fake investment schemes or false offers for services.
  • Scammers used ChatGPT to create messages for potential victims.
  • People are advised to use features like two-step verification to secure their accounts.

Source Verification

Call to vet YouTube ads like regular TV to stop scams

Call to vet YouTube ads like regular TV to stop scams

Summary

The Liberal Democrats are calling for YouTube ads to be checked like traditional TV ads to protect users from scams and harmful content. They want media regulator Ofcom to issue fines and oversee ad screening. YouTube is now the UK's second-most-watched media service after the BBC.

Key Facts

  • The Liberal Democrats propose that ads on YouTube be vetted like TV ads to prevent scams and harmful content.
  • Ofcom, the media regulator, is urged to issue fines and screen YouTube ads.
  • YouTube is the UK's second-most-watched media service, after the BBC.
  • At present, TV and radio ads are pre-approved by industry bodies, unlike YouTube ads.
  • Max Wilkinson MP says current ad regulations for digital platforms are too lenient compared to TV.
  • The ASA handles complaints about scam ads but tackling them falls under Ofcom's responsibilities.
  • The ASA reported over 1,600 potential online scam ads in 2024, with many involving deepfake videos.
  • Google removed over 411 million UK ads in 2024 and suspended over 1 million ad accounts.

Source Verification

Tech Life

Tech Life

Summary

DeepSeek, a Chinese AI technology, gained significant attention in the AI field this year. The article discusses current information about the technology and explores concerns about AI-related risks.

Key Facts

  • DeepSeek is an AI technology developed in China.
  • It received major attention in the AI industry this year.
  • The article examines what has happened to DeepSeek recently.
  • There are discussions about potential risks associated with AI, sometimes called "AI doomsday."
  • It is available as a 26-minute program on BBC Sounds.
  • The release date of this program was August 5, 2025.

Source Verification

Drones delivering coffee? Trump administration wants more companies using UAVs

Drones delivering coffee? Trump administration wants more companies using UAVs

Summary

The Trump administration has proposed new rules to make it easier for companies to use drones for business tasks like delivering goods and inspecting infrastructure. This proposal aims to streamline the approval process and establish clear regulations for commercial drone use in U.S. airspace. The rules will include safety measures such as requiring collision avoidance technology.

Key Facts

  • The proposal aims to simplify the approval process for businesses to use drones in the U.S.
  • Previously, companies needed individual waivers to operate drones beyond their line of sight.
  • President Trump signed an executive order to promote drone use by businesses.
  • The proposed rules require drones to have industry-standard safety technology to avoid collisions.
  • Commercial drones would not be allowed to fly over large gatherings like concerts or sports events.
  • Certain drone-related employees must pass a security check by the TSA.
  • The proposal is open for public comments for 60 days.
  • The FAA emphasizes the need for regulation to create safer and more organized airspace.

Source Verification

Exclusive: Anthropic's Claude AI model takes on (and beats) human hackers

Exclusive: Anthropic's Claude AI model takes on (and beats) human hackers

Summary

Anthropic's AI model, Claude, has been successfully participating in student hacking competitions, achieving high ranks with minimal human help. The AI, entered by a team member into several competitions, has solved challenges quickly, often outperforming human teams. This highlights the potential of AI in cybersecurity.

Key Facts

  • Claude is an AI model developed by Anthropic.
  • It was entered in student hacking competitions like PicoCTF.
  • Claude ranked in the top 3% of participants at PicoCTF.
  • In some competitions, Claude solved multiple challenges rapidly, placing in the top rankings.
  • AI agents like Claude are showing near-expert levels in cybersecurity tasks.
  • In the Hack the Box competition, AI teams performed better than many human teams.
  • AI challenges included reverse-engineering and system hacking but struggled with unexpected tasks.
  • There is potential for AI to significantly impact offensive and defensive cybersecurity in the future.

Source Verification

WATCH: U.S. government proposes easing some restrictions on drones traveling long distances

WATCH: U.S. government proposes easing some restrictions on drones traveling long distances

Summary

The U.S. government proposed a new rule to make it easier for companies to use drones over long distances. This rule would allow drones to operate beyond the operator's sight without needing a special waiver. The rule aims to expand the use of drones in deliveries, infrastructure inspection, and agriculture.

Key Facts

  • The new rule allows drones to fly beyond the operator's sight more easily.
  • Previously, companies needed a waiver to fly drones long distances; 657 waivers had already been approved.
  • The rule is intended to help with deliveries, infrastructure inspections, and agricultural use.
  • Michael Robbins described the rule as a key step for improving drone operations.
  • The Federal Aviation Administration (FAA) ensures drones won't disrupt aviation.
  • The rule follows President Trump's executive orders to promote drone technology.
  • Drones' uses include search and rescue, package delivery, and even in military contexts.
  • There are concerns about drones being used in terrorism, espionage, and drug smuggling.

Source Verification

Nasa to put nuclear reactor on the Moon by 2030 - US media

Nasa to put nuclear reactor on the Moon by 2030 - US media

Summary

The U.S. space agency, NASA, plans to build a nuclear reactor on the Moon by 2030. This project is part of a larger effort to establish a permanent human base on the lunar surface. NASA is seeking commercial partners to design a reactor capable of providing continuous power, which is crucial due to the Moon's unique day-night cycle.

Key Facts

  • NASA aims to put a nuclear reactor on the Moon by 2030.
  • The goal is to support a permanent human base on the Moon.
  • Continuous power is needed on the Moon because its day-night cycle lasts about four Earth weeks.
  • NASA is inviting commercial companies to propose designs for a reactor that generates at least 100 kilowatts of power.
  • There is competition from countries like China and Russia, which also plan Moon projects.
  • Concerns exist about the feasibility and motivation behind these plans, partly due to recent NASA budget cuts.
  • Safety concerns include launching radioactive materials into space.
  • NASA's plans are linked to its Artemis program, which aims to return humans to the Moon.

Source Verification

Inside AI's billion-dollar job offer lottery

Inside AI's billion-dollar job offer lottery

Summary

Big tech companies are competing to hire top AI experts with high compensation packages, sometimes offering amounts that could reach over a billion dollars. These offers mostly consist of stock options, which can fluctuate in value depending on the company's stock performance. Many top researchers prioritize their mission and research freedom over financial gain.

Key Facts

  • Tech companies are eager to hire top AI experts, offering very high compensation.
  • Mark Zuckerberg from Meta offered Andrew Tulloch a package worth up to $1.5 billion over six years.
  • These compensation packages usually involve stock options, not direct cash payments.
  • Stock options can decrease in value if the company’s stock price drops after hiring.
  • Researchers are often motivated by solving challenging problems more than by money.
  • Sam Altman of OpenAI suggests that a strong mission focus can attract top talent better than high pay.
  • In the past, similar hiring battles occurred in tech over skills like machine learning and internet networking.
  • High-tech AI work requires access to powerful and expensive computing resources.

Source Verification