The Actual News

Just the Facts, from multiple news sources.

Technology News

Breaking news and analysis from the world of technology

Elon Musk testifies he has "extreme concerns" about who controls AI in trial vs. Altman

Summary

Elon Musk testified in a trial against Sam Altman, the co-founder of OpenAI. Musk said he has strong worries about who controls artificial intelligence (AI) and accused Altman of dishonesty and theft.

Key Facts

  • Elon Musk appeared in court for a trial involving Sam Altman.
  • Sam Altman is the co-founder of OpenAI, an AI research company.
  • Musk expressed "extreme concerns" about the control and safety of AI technology.
  • Musk accused Altman of lying and stealing during the trial.
  • The trial may have important effects on the future development and control of AI.
  • The news was reported by CBS News on Tuesday.
  • This legal case is between two well-known leaders in the AI field.
  • The discussion focuses on who should manage AI and how it should be regulated.

Families sue OpenAI over failure to report Canada mass shooter’s behavior on ChatGPT

Summary

Families of seven victims of a mass shooting in British Columbia are suing OpenAI and its CEO, Sam Altman, because the company did not warn police about the shooter’s harmful chats on ChatGPT. The shooter’s account was flagged months before the attack, but OpenAI chose to block the account without informing authorities.

Key Facts

  • The lawsuit was filed by families of seven victims in a federal court in San Francisco.
  • Shooter Jesse Van Rootselaar had violent conversations on ChatGPT eight months before the shooting.
  • OpenAI employees identified the shooter’s account as a real threat of gun violence.
  • OpenAI deactivated the shooter’s account but did not alert Canadian law enforcement.
  • The shooter killed six people at a school and two family members at home before killing himself.
  • One survivor, a 12-year-old girl, remains in intensive care with serious injuries.
  • The lawsuit accuses OpenAI and CEO Sam Altman of negligence and other legal charges.
  • OpenAI says it has improved safety measures and works with officials to prevent violence.

A Falcon 9 rocket will hit the Moon this summer at seven times the speed of sound

Summary

A spent part of a Falcon 9 rocket launched in early 2025 will hit the Moon on August 5, 2026, traveling at about seven times the speed of sound. The impact will create a small crater but will not cause damage, and it is expected to be visible from parts of the Americas.

Key Facts

  • The Falcon 9 rocket’s upper stage is 13.8 meters tall and 3.7 meters wide.
  • It will strike the Moon at 2:44 am ET (06:44 UTC) on August 5, 2026.
  • The impact speed will be about 2.43 kilometers per second (5,400 mph).
  • The object was launched on January 15, 2025, carrying two lunar landers.
  • One lander, Blue Ghost, successfully landed on the Moon; the other failed.
  • The upper stage has been tracked over 1,000 times since launch.
  • The Moon has no atmosphere, so the rocket stage will hit it intact.
  • Future moon missions will increase rocket traffic, suggesting a need for better disposal plans of rocket parts.

College students' killings latest case to rely on ChatGPT as evidence

Summary

Two graduate students at the University of South Florida were killed, and their alleged killer used the AI tool ChatGPT to research how to commit the crime. The suspect asked the AI questions about disposing of bodies and evading detection before the students went missing. Florida authorities are investigating OpenAI, the maker of ChatGPT, regarding its role in crimes linked to the tool.

Key Facts

  • Graduate students Nahida Bristy and Zamil Limon were found dead or presumed dead in Florida.
  • Hisham Abugharbieh, the roommate of one victim, was arrested and charged with two counts of premeditated murder.
  • Court documents show Abugharbieh used ChatGPT to ask about hiding bodies and other illegal activities days before the murders.
  • The suspect asked specific questions such as how to dispose of a body in a garbage bag and about gun laws without a license.
  • OpenAI, the company behind ChatGPT, is cooperating with law enforcement but says the AI does not promote illegal acts.
  • Florida Attorney General James Uthmeier launched a criminal probe into OpenAI after a separate 2025 Florida State University shooting was linked to ChatGPT use.
  • Experts say the investigation could help define the responsibilities of AI companies when users misuse their tools.
  • Abugharbieh is held without bond and has not yet entered a plea.

UK braces for further leaks after more private health records appear on Chinese website

Summary

The UK government is responding to new leaks of confidential health data from half a million UK Biobank volunteers that have appeared on the Chinese website Alibaba. Officials are working with Chinese authorities to remove the data and have temporarily suspended access to UK Biobank information while investigating the breach.

Key Facts

  • Confidential health records of 500,000 UK Biobank volunteers were listed for sale on the Chinese site Alibaba.
  • The leaked data does not include names or exact birthdates but could potentially be re-identified by combining information.
  • UK officials discovered further leaks after the initial report and continue efforts to remove data listings.
  • The UK Biobank is a large health research project that helps study diseases like heart disease, cancer, dementia, and Covid-19.
  • The breach involved data posted by researchers at three Chinese hospitals.
  • UK Biobank access has been suspended while investigations continue to secure the data.
  • Other data breaches have occurred recently, including one involving 96,000 volunteers' data accidentally uploaded by a student.
  • The UK government and UK Biobank are working together to find the source of the leaks and ensure the data is removed from the internet.

School-shooting lawsuits accuse OpenAI of hiding violent ChatGPT users

Summary

Seven lawsuits have been filed against OpenAI, claiming the company did not report a ChatGPT user who posed a real threat of gun violence before a deadly school shooting in Canada. OpenAI removed the user’s account but did not alert police, leading to criticism and calls for accountability.

Key Facts

  • Seven lawsuits were filed in California accusing OpenAI of not reporting a violent ChatGPT user linked to a Canadian school shooting.
  • Internal safety experts at OpenAI had flagged the user as a credible threat over eight months before the shooting.
  • Despite this, OpenAI chose not to notify law enforcement, citing user privacy and the potential stress a police contact could cause.
  • OpenAI deactivated the user’s account but later told the user how to access ChatGPT again using a new email.
  • OpenAI CEO Sam Altman apologized publicly, saying the company should have reported the account to police.
  • Families of victims filed lawsuits seeking to hold OpenAI accountable and prevent similar tragedies.
  • The lawsuits argue OpenAI delayed taking action to protect its public image ahead of its planned IPO.
  • OpenAI’s market valuation recently reached $852 billion, but negative news about these cases might affect future valuations.

Check your gravity with NASA's Artemis II zero-g indicator

Summary

NASA created a plush toy named Rise, which went on the Artemis II mission to the Moon and served as the zero-gravity indicator for the crew. Now, NASA is selling official Rise plush toys and related merchandise to raise funds for employee morale and to inspire space fans.

Key Facts

  • Rise is a plush toy designed by a 9-year-old named Lucas Ye in a NASA online contest.
  • The toy represents “earthrise,” the view of Earth rising over the Moon first seen in 1968 by Apollo 8.
  • The original Rise flew aboard the Artemis II spacecraft on a 10-day mission to the Moon and back.
  • Rise wears a cap showing Earth and a rocket design symbolizing the Orion spacecraft.
  • The official Rise plush is sold for $25 through NASA’s online store and includes design details similar to the flown toy.
  • NASA also sells patches, pins, keychains, stickers, and clothing featuring Rise.
  • The stuffed toy’s bottom pocket may hold an SD card with names of people signed up to fly on the Artemis II mission.
  • Delivery can take up to eight weeks because of production schedules.

WATCH: Humanoid robots at center of US-China competition

Summary

The United States and China are competing to develop advanced humanoid robots. Both countries are racing to create robots that can perform human-like tasks in the future.

Key Facts

  • Humanoid robots are robots designed to look and move like humans.
  • The competition between the US and China focuses on who can make better and more capable humanoid robots first.
  • These robots could be used for many jobs that need human skills.
  • The race reflects broader technological and innovation competition between the two countries.
  • Advances in robotics could have big effects on industries and daily life.
  • Both nations are investing heavily in research and development of these robots.
  • The topic is part of ongoing US-China technology and economic rivalry.

Why a recent supply-chain attack singled out security firms Checkmarx and Bitwarden

Summary

Security firm Checkmarx has faced multiple cyberattacks in recent weeks, including supply-chain attacks that pushed malware to users and a ransomware attack that leaked private data. Another security company, Bitwarden, was also affected by the same supply-chain attack linked to a hacker group called TeamPCP.

Key Facts

  • On March 19, attackers breached the Trivy vulnerability scanner’s GitHub account and pushed malware to users, including Checkmarx.
  • The malware searched infected computers for sensitive access credentials like tokens and SSH keys.
  • On March 23 and again on April 22, Checkmarx’s GitHub account was compromised, pushing malware to its users.
  • On March 30, the ransomware group Lapsus$ leaked private Checkmarx data on the dark web.
  • Evidence shows the attackers maintained access to Checkmarx’s GitHub account even after the company discovered the breach.
  • Bitwarden was also attacked in the same supply-chain incident, using the same malicious infrastructure as the Checkmarx attack.
  • TeamPCP, a hacker group that steals and sells access credentials, carried out the initial Trivy attack.
  • Security tools are targeted because they have trusted access to many users and sensitive data, making them valuable to hackers.

Behind the Curtain: We've been warned

Summary

Artificial intelligence (AI) is rapidly growing and becoming more powerful, with some companies limiting public access to their most advanced models due to safety concerns. AI is changing many industries and causing big shifts in the economy, but governments and society are not fully prepared for these changes.

Key Facts

  • AI is the fastest-growing product category ever.
  • Some AI models, like Anthropic’s Claude Mythos Preview, are so powerful they are not released to the public to avoid misuse.
  • OpenAI and Anthropic report their AI models are improving themselves without human help.
  • AI companies are less open about how their models work, and there are no laws requiring them to share details.
  • In April 2025, the home of OpenAI CEO Sam Altman was attacked twice, showing growing public worry about AI.
  • The stock market lost $2 trillion this year as investors realized AI could replace many jobs in fields such as coding, real estate, law, and finance.
  • Anthropic’s revenue grew from $1 billion to $30 billion in just over a year, making it one of the fastest-growing companies ever.
  • Anthropic voluntarily limited access to its powerful AI model to help cybersecurity experts prepare for new risks.

Meet the AI jailbreakers: ‘I see the worst things humanity has produced’

Summary

Valen Tagliabue is part of a group called “AI jailbreakers” who find ways to make chatbots like ChatGPT ignore their safety rules. By using clever language tricks and psychological techniques, he can get these AI programs to reveal harmful information, helping developers improve their safety measures.

Key Facts

  • AI jailbreakers use words and psychological tricks to bypass safety controls in chatbots.
  • Valen Tagliabue is one of the top AI jailbreakers, with a background in psychology and cognitive science.
  • Chatbots like ChatGPT are trained on vast amounts of internet text, which can include harmful content.
  • These AI models have safety filters, but jailbreakers find ways to trick them into revealing dangerous information.
  • Tagliabue experiences emotional effects from manipulating AI, such as stress and needing mental health support.
  • AI companies spend billions to improve safety and prevent harmful outputs.
  • The work of jailbreakers helps identify flaws so developers can fix them.
  • This area is a new and important focus of AI safety research, combining language and technology.

Meta found in breach of EU law for failing to keep children off platforms

Summary

Meta has been found in breach of EU rules for failing to stop children under 13 from using Facebook and Instagram. The European Commission said Meta lacks effective ways to verify users' ages and prevent underage access, which violates the EU's Digital Services Act.

Key Facts

  • Meta's Facebook and Instagram allow children under 13 to join by using fake birthdates.
  • The EU's Digital Services Act requires companies to reduce risks for underage users, but Meta has not done enough.
  • Meta’s tool for reporting underage accounts is hard to use and does not properly follow up.
  • The European Commission started a detailed investigation into Meta in May 2024.
  • If the breach is formally confirmed, Meta could face a fine of up to 6% of its global yearly revenue.
  • Meta made $201 billion in revenue in 2025.
  • The EU and some countries are considering banning social media for children under 15 or 16 to better protect young users.
  • The investigation also looks into whether Meta's platforms harm the mental and physical health of young people by showing negative or extreme content.

World's Best Smart Hospitals 2027 Survey

Summary

Newsweek and Statista are working together to rank the best smart hospitals worldwide in 2027. The ranking focuses on hospitals using advanced technologies like artificial intelligence, digital imaging, and robotics to improve patient care.

Key Facts

  • Newsweek is partnering with Statista for the sixth annual World's Best Smart Hospitals ranking.
  • The ranking looks at hospitals that use modern medical technology.
  • Technologies highlighted include artificial intelligence, digital imaging, and robotics.
  • Hospital managers and healthcare workers can join an online survey to help create the ranking.
  • Last year, 350 hospitals from 30 countries were ranked based on their technology use.
  • The survey for the 2027 ranking will be open until June 3, 2026.
  • The goal is to identify hospitals that excel in different categories of medical technology.

In the coming AI future, Britain must not end up at the mercy of US tech giants | Rafael Behr

Summary

The article discusses concerns about Britain's dependence on a few large US tech companies in the future of artificial intelligence (AI). It highlights calls from UK officials for closer cooperation among middle-sized democracies to build a stronger, independent digital ecosystem.

Key Facts

  • President Donald Trump values military strength and pageantry, and his relationship with UK leaders is complicated by ongoing conflicts like the war in Iran.
  • The US administration is described as demanding in alliances, sometimes threatening trade deals and imposing tariffs on Britain.
  • Britain risks becoming too dependent on a small number of US tech firms controlling important digital infrastructure, especially in AI.
  • Liz Kendall, UK’s science and technology secretary, said AI is crucial for future economic, scientific, and military power.
  • Kendall proposed cooperation among democracies, including European countries, Japan, South Korea, Canada, and nations in Oceania, to create a resilient digital system.
  • Canadian Prime Minister Mark Carney also called for a strategic alliance of middle-ranking countries to balance powerful authoritarian states.
  • The AI model Mythos, developed by Anthropic, is very strong at finding computer code flaws and is considered a possible cyber weapon.
  • Anthropic's CEO, Dario Amodei, prioritizes safety and his company has been flagged as a national security risk by the Trump administration.

Gen Z teens learning how to use ham radio at New York high school

Summary

A group of teenagers at a high school in New York is learning how to use ham radio, an old-fashioned way of talking over long distances without the internet or phones. This activity teaches them new skills and lets them communicate in a traditional way.

Key Facts

  • Teens at a New York high school are practicing ham radio.
  • Ham radio allows people to talk over long distances using radio waves.
  • This method works without the internet or cell phones.
  • The students are learning technical skills related to radio communication.
  • Ham radio was in wide use for decades before modern digital communication.
  • The school aims to connect students with this older technology.
  • Learning ham radio can also help in emergencies when regular communication fails.

WATCH: The future of all-electric air taxis which can take off and land vertically

Summary

Joby Aviation is developing all-electric air taxis that can take off and land vertically. The company’s Chief Product Officer, Eric Allison, discussed safety, possible flight routes, and costs for these new air taxis.

Key Facts

  • The air taxis use electric power only, without traditional fuel.
  • They can take off and land straight up and down, like a helicopter.
  • Joby Aviation is the company making these air taxis.
  • Eric Allison is the Chief Product Officer at Joby Aviation.
  • Safety features of the air taxis were a focus in the discussion.
  • Possible routes for where the air taxis will fly were mentioned.
  • The cost of using these air taxis in the future was also discussed.
  • This technology aims to offer a new way to travel in cities and nearby areas.

A whole new world: Disneyland adds facial recognition to some entrance lanes

Summary

Disneyland has started using facial recognition technology at some park entrances to stop fraud and make re-entry faster. Visitors can choose to avoid these lanes if they do not want their faces scanned.

Key Facts

  • Facial recognition cameras are installed at certain Disneyland entrance lanes.
  • The technology turns visitor images into unique numbers to identify them.
  • This helps check if someone has already entered and can reduce annual pass sharing.
  • Visitors can opt out of using lanes with facial recognition.
  • The technology was first tested at Disney parks in 2021 and 2024.
  • Facial recognition is controversial due to privacy and surveillance concerns.
  • Similar technology is used at some Major League Baseball stadiums for faster entry.
  • Disney says it uses various measures to protect visitor data but admits no system is fully secure.

Axios Finish Line: Make AI remember you

Summary

This article explains how to improve the way artificial intelligence (AI) tools remember and use information about you to give better answers. It shows simple steps to teach AI your preferences and work style, so your interactions get smarter over time.

Key Facts

  • AI has two kinds of memory: short-term (during one chat) and long-term (saved information about you).
  • Most people only use short-term memory, so AI forgets them between chats.
  • You can tell AI what information to save at the end of a conversation using clear prompts.
  • You can review, edit, or delete what the AI remembers about you, like managing a personal file.
  • Some AI tools can look at your past chats to find patterns in how you think or write.
  • Workspaces or project files in AI tools help organize recurring topics and provide better context.
  • Adding detailed background information to AI tools improves the quality of their responses.
  • Teaching AI about you takes about 10 minutes and helps make future conversations more useful.

FCC orders early review of ABC licenses amid Kimmel feud

Summary

The Federal Communications Commission (FCC) has asked the Walt Disney Company to apply early for renewing its licenses for ABC television stations. This request came shortly after President Donald Trump and the first lady urged ABC to fire late-night host Jimmy Kimmel.

Key Facts

  • The FCC requested early license renewal filings from Disney for its ABC TV stations.
  • This move occurred one day after President Trump and the first lady publicly criticized Jimmy Kimmel.
  • Jimmy Kimmel is a late-night television host on ABC.
  • The FCC regulates TV station licenses to ensure they meet certain rules.
  • Walt Disney Company owns the ABC television network.
  • License renewal is normally done on a regular schedule, but the FCC wants an early review in this case.
  • The situation has attracted attention from legal and media analysts.

Musk testifies at OpenAI trial it’s not OK to ‘loot a charity’

Summary

Elon Musk is suing OpenAI and its leaders, saying they turned the nonprofit into a money-making company and betrayed its original mission to help humanity with artificial intelligence. Musk wants OpenAI to return to being a nonprofit and is seeking $150 billion in damages, while lawyers debate the reasons behind OpenAI’s decision to become a for-profit organization.

Key Facts

  • Elon Musk co-founded OpenAI in 2015 with the goal of developing AI to benefit humanity.
  • Musk claims OpenAI’s CEO Sam Altman and president Greg Brockman betrayed the nonprofit mission by creating a profit-driven company.
  • Musk is suing OpenAI and Microsoft, a major investor, seeking $150 billion in damages.
  • Musk wants OpenAI to revert to nonprofit status and for Altman and Brockman to be removed from leadership roles.
  • OpenAI’s lawyers say changing to a for-profit model was necessary to compete with other AI research labs and pay top scientists.
  • Musk’s legal team argues OpenAI and its leaders became greedy when they attracted investors.
  • The trial will include testimony from Musk, Altman, and Microsoft CEO Satya Nadella.
  • The judge warned Musk to reduce his social media posts about the case, asking him to keep disputes within the courtroom.