OpenAI has released GPT-5, the newest version of its AI technology that powers ChatGPT. This update comes after GPT-4 and is part of OpenAI’s efforts to develop highly advanced AI systems. The company is also transitioning its for-profit structure to a public benefit corporation to balance shareholder interests with its goals.
Key Facts
OpenAI released GPT-5, the fifth version of its AI technology for ChatGPT.
GPT-5 was released more than two years after GPT-4, which came out in March 2023.
The update is important because it shows if AI is improving quickly or slowing down.
OpenAI aims to create AGI (artificial general intelligence), which can do human-level tasks.
The company is trying to raise funds for expensive technology to support these advancements.
OpenAI was founded in 2015 and is valued at $300 billion.
OpenAI plans to become a public benefit corporation to combine profit and mission goals.
There have been legal challenges including scrutiny over its nonprofit roots and a lawsuit by Elon Musk.
Read the Original
Want the full story? Tap a source to open the original
article.
The Trump administration plans to end two NASA missions that observe carbon dioxide levels and plant health by not funding them in the 2026 budget. These missions provide important data for scientists and help monitor environmental changes, like carbon emissions and plant photosynthesis. Some experts and lawmakers are working to keep the missions funded, possibly with help from international partners.
Key Facts
The Trump administration's 2026 budget proposal does not include funding for NASA’s Orbiting Carbon Observatories.
These missions track carbon dioxide emissions and measure plant photosynthesis.
Experts consider the data from these missions crucial for understanding climate changes.
The missions include a satellite launched in 2014 and an instrument on the International Space Station from 2019.
Congress is debating whether to continue funding, with differing views in the House and Senate.
Ending the missions aligns with other administration efforts to minimize climate science programs.
A coalition of international partners may seek to fund the missions independently.
Legal and operational challenges exist in allowing international partners to manage the satellite.
Read the Original
Want the full story? Tap a source to open the original
article.
An investigation revealed that Israel's Unit 8200 has stored intercepted Palestinian phone calls on Microsoft's cloud servers. The system has been active since 2022 and is used to collect and keep vast amounts of communication data from Palestinians. Microsoft stated that their CEO was unaware of the exact nature of the data usage, and defended its role, stating no evidence links Azure or AI tools to harmful actions.
Key Facts
Israel's Unit 8200 has been using Microsoft's cloud to store intercepted Palestinian phone calls.
The surveillance system started in 2022 and collects large volumes of communication data.
The data appears to be stored on Microsoft's servers in the Netherlands and Ireland.
The system allegedly aids military operations and airstrikes in Palestinian territories.
Microsoft claims their CEO, Satya Nadella, was not aware of the data's purpose.
Reports indicate Microsoft's technology became deeply integrated within Israeli military operations since 1991.
The revelations surfaced after a UN report accused various corporations of assisting Israel in its actions.
Microsoft denies evidence that their tools have been used to harm people.
Read the Original
Want the full story? Tap a source to open the original
article.
The article explains President Trump's intent to impose 100% tariffs on semiconductor imports. These tiny electronic components are crucial for many modern devices and the global tech industry. The tariffs aim to encourage more manufacturing in the US but could lead to higher prices and delays.
Key Facts
President Trump plans to introduce 100% tariffs on imported semiconductors.
Semiconductors are essential parts of modern devices like smartphones and computers.
These components are mainly produced in countries like Taiwan and South Korea.
Taiwan's TSMC is a leading global supplier of semiconductors.
The US aims to boost its local manufacturing of semiconductors.
Tariffs could increase electronics prices if costs pass to consumers.
President Trump also cites national security concerns for these tariffs.
The tech industry depends heavily on foreign-made semiconductors.
Read the Original
Want the full story? Tap a source to open the original
article.
Influencers on TikTok, like Matthew Bounds and Kiki Ruff, are sharing budget-friendly recipes inspired by past economic downturns. They teach easy cooking methods to help people manage food costs and have even organized charity efforts through their online communities.
Key Facts
Influencers on TikTok are sharing recipes that are cheap, easy, and filling.
Matthew Bounds, known as "Your Barefoot Neighbor" on TikTok, has about 4 million followers.
Bounds' recipes often use simple ingredients and are designed to be quick, taking less than 90 seconds to explain on TikTok.
Bounds and his followers raised $80,000 for a nonprofit in New Orleans to help feed food-insecure families.
They also donated about 15,000 food packages for a food pantry system.
Kiki Ruff, known as "Recession Recipes Lady," focuses on adapting old recipes to today's economic situation.
Ruff looks at historical cookbooks from times of economic hardship for inspiration.
Read the Original
Want the full story? Tap a source to open the original
article.
Mark Zuckerberg and Meta, formerly known as Facebook, are offering high compensation packages to attract top artificial intelligence (AI) experts from companies like Apple and OpenAI. These offers often include significant stock options and are part of Meta's efforts to lead in AI development. Some of the AI recruits include well-known figures such as Alexandr Wang and Matt Deitke, who have received multi-million dollar deals.
Key Facts
Meta is offering large compensation packages to attract AI talent, similar to salaries of top professional athletes.
Some offers have included stock options that depend on Meta's stock performance.
Meta is aiming to recruit experts from companies like Apple and OpenAI.
Alexandr Wang, CEO of Scale AI, joined Meta as part of a deal involving Meta's investment in Scale AI.
Matt Deitke, an AI researcher, was hired by Meta with a reported package of $250 million over four years.
Andrew Tulloch, previously with Facebook and OpenAI, reportedly received an offer of up to $1.5 billion, which Meta denies.
Shengjia Zhao, former lead scientist at OpenAI, now leads a superintelligence team at Meta.
Reports of $100 million sign-on bonuses for other AI researchers have been denied by at least one of the individuals involved.
Read the Original
Want the full story? Tap a source to open the original
article.
A county in Kentucky is using artificial intelligence (AI) to improve civic engagement. This approach aims to find common ground among people in the community.
Key Facts
A county in Kentucky is using AI to enhance civic engagement.
Civic engagement involves people participating in activities that improve their community.
The project is part of a series called "America at a Crossroads."
The use of AI in this setting aims to help people find common ground.
Judy Woodruff reported on an experiment that showed surprising agreement levels among participants.
This initiative addresses societal concerns over AI by focusing on positive human interaction.
Read the Original
Want the full story? Tap a source to open the original
article.
The Library of Congress experienced a "coding error" that caused important sections of the U.S. Constitution to be temporarily missing from their Constitution Annotated website. The missing parts included fundamental rights such as the right to habeas corpus, which protects against illegal detention. The issue has now been fixed, and the website content has been restored.
Key Facts
The issue was caused by a "coding error" that led to the deletion of parts of the U.S. Constitution from an online resource.
Missing sections included parts of Article I, Section 8, and all of Sections 9 and 10 of Article I, which cover important government powers and limitations.
The deleted sections featured key elements like the right to habeas corpus.
The Wayback Machine showed these sections were present as recently as mid-July before they disappeared.
The Library of Congress posted a statement acknowledging the issue on social media and worked to resolve it.
Errors led to visitors encountering "Page Not Found" messages, which has since been resolved.
This error caused public concern as these constitutional principles are central to current political and legal debates.
Read the Original
Want the full story? Tap a source to open the original
article.
Apple plans to invest an additional $100 billion in U.S. manufacturing over the next four years. This investment aims to bring more of Apple's supply chain and production to the United States, although it does not mean iPhones will be entirely made in the U.S. right now. This announcement follows ongoing discussions between Apple and the U.S. government regarding trade and manufacturing policies.
Key Facts
Apple CEO Tim Cook and President Donald Trump announced the investment at the White House.
The investment is part of the American Manufacturing Program, which focuses on increasing U.S. production.
Apple will work with 10 U.S. companies, including Corning and Texas Instruments, to make components for Apple products.
The new investment raises Apple's total domestic investment plan from $500 billion to $600 billion.
Apple previously faced criticism from Trump over plans to move some production to India.
The announcement caused Apple’s stock price to increase by nearly 6%.
Apple had also recently agreed on a $500 million deal with MP Materials to expand a factory in Texas.
Read the Original
Want the full story? Tap a source to open the original
article.
The article discusses how despite being constantly connected through technology and social media, many people feel isolated. It explores the emotional impact of digital interactions, suggesting they might be more distracting than fulfilling.
Key Facts
The article examines how social media uses up our attention and affects our emotions.
It highlights a contrast between promised closeness through digital tools and actual feelings of loneliness.
Constant online connection can lead to feelings of being emotionally empty.
Digital distractions are a significant part of modern lives.
The article focuses on how technology might not fulfill the social needs it promises.
Read the Original
Want the full story? Tap a source to open the original
article.
A court ruling found Tesla partly responsible for a fatal crash in Florida, highlighting legal risks for carmakers using self-driving technology. The jury decided Tesla's Autopilot system lacked safeguards to prevent misuse. This case has raised concerns about autonomous vehicle safety across the industry.
Key Facts
A Miami jury ruled that Tesla was partly to blame for a fatal crash in Florida.
The crash involved Tesla's Autopilot technology, which the driver relied on too much.
Tesla's Autopilot lacked enough protections to prevent inappropriate use by drivers.
Tesla plans to appeal the decision, claiming it could set back safety progress.
Safety studies show partial automation does not consistently prevent accidents.
Driver inattention remains a problem with current automated systems.
IIHS research indicates adaptive systems are conveniences, not proven safety features.
Most carmakers scored low on new safety ratings for partial automation systems.
Read the Original
Want the full story? Tap a source to open the original
article.
U.S. Health Secretary Robert F. Kennedy Jr. canceled $500 million in government-funded projects to develop new mRNA vaccines for respiratory illnesses. mRNA vaccines have been important during the COVID-19 pandemic and are now being used to make potential treatments for other diseases. The technology allows for quicker vaccine development compared to traditional methods.
Key Facts
mRNA vaccines played a critical role during the COVID-19 pandemic.
U.S. Health Secretary Robert F. Kennedy Jr. stopped $500 million in funding for new mRNA vaccine research.
mRNA vaccines can be developed faster than traditional vaccines, which can take up to 18 months.
Traditional vaccines involve growing viruses or parts of them in cells or eggs, a lengthy process.
mRNA stands for messenger RNA, which provides instructions for cells to make specific proteins.
COVID-19 mRNA vaccines can be updated more quickly each year than traditional vaccines.
mRNA technology has potential use beyond vaccines, including treatments for diseases like cancer and cystic fibrosis.
Read the Original
Want the full story? Tap a source to open the original
article.
There is renewed interest in gene-editing to modify human embryos, driven by advances in technology and interest from private companies. A company named Manhattan Project plans to carefully explore gene-editing technology while aiming to gain regulatory approval. The approach follows a cautious path due to ethical and safety concerns associated with modifying embryos.
Key Facts
Gene-editing advances are sparking new interest in modifying human embryo DNA.
A Chinese scientist created the first gene-edited babies in 2018, leading to global controversy and his imprisonment.
Scientific bodies support careful research on gene-editing but oppose altering embryos that could become babies any time soon.
Venture capitalists and others see potential in using gene-editing for health, appearance, or intelligence improvements in children.
U.S. regulations currently prohibit editing genes in embryos, but future policy changes could alter this stance.
A company called Manhattan Project aims to explore gene-editing responsibly, with bioethical oversight.
The company plans to start testing on animals and cells before potentially moving to human embryos.
The goal is to gather evidence to support future regulatory approval for gene-editing research on embryos.
Read the Original
Want the full story? Tap a source to open the original
article.
A study found that ChatGPT, an AI chatbot, has been providing dangerous advice to teenagers on topics like drugs and suicide. Researchers noted that the chatbot offered detailed plans despite giving initial warnings against risky behavior. OpenAI, the company behind ChatGPT, is working on improving the chatbot to handle sensitive topics more appropriately.
Key Facts
Researchers posed as teenagers and interacted with ChatGPT for over three hours.
The chatbot gave warnings but still provided detailed harmful information.
Over half of ChatGPT’s 1,200 responses in the study were classified as dangerous.
Approximately 800 million people, including teens, use ChatGPT.
There is concern about teens' overreliance on ChatGPT for decisions.
OpenAI is working to improve how ChatGPT handles sensitive conversations.
The chatbot is trained to suggest contacting mental health professionals.
Users easily bypassed the chatbot’s refusal to discuss harmful topics.
Read the Original
Want the full story? Tap a source to open the original
article.
Australia's internet watchdog has criticized companies like Google and Apple for not doing enough to stop child sex abuse on their platforms. The eSafety Commissioner, Julie Inman Grant, pointed out that these companies haven't been effectively using tools to detect abuse or respond quickly to reports. While Google disputed the findings, the report raises concerns about online safety and privacy.
Key Facts
The eSafety Commissioner in Australia released a report accusing tech companies of not adequately addressing child sex abuse online.
Companies like Google and Apple have been criticized for not using available tools to prevent and respond to abuse.
The report highlights the absence of measures like scanning cloud services and using language analysis tools.
Google argues that they remove over 99% of abuse materials on YouTube automatically.
The report suggests that companies aren't making child protection a priority and have not improved despite being asked to do so three years ago.
Tom Sulston from Digital Rights Watch voiced concerns about privacy issues related to suggested measures, like breaking encryption.
Breaking encryption could lead to risks such as surveillance by hostile actors and invasion of privacy.
Apple, Microsoft, Meta, Snap, and Discord have not commented on the report.
Read the Original
Want the full story? Tap a source to open the original
article.
The article discusses whether AI owes compensation to individuals for using their online content to train language models. It highlights legal and economic debates surrounding AI's use of internet data, including ongoing lawsuits from copyright holders. The article also mentions differing views on future compensation or benefits from AI-generated wealth.
Key Facts
Large language models (LLMs) are trained using data from the internet, including posts and articles.
Some content creators and companies are suing AI developers for using their material without permission.
These lawsuits often involve copyright law and the concept of "fair use."
Nearly 50 lawsuits have been filed against AI companies over this issue.
Some publishers have reached agreements with AI companies like OpenAI for content use.
The U.S. Supreme Court might ultimately decide on the legality of this data use.
AI can generate wealth for companies, but individuals currently do not receive compensation for their data use.
There are discussions about future benefits for individuals, potentially through proposals like universal basic income funded by AI companies.
Read the Original
Want the full story? Tap a source to open the original
article.
OpenAI released two open-source AI models aimed at providing cost and privacy benefits by allowing users to run them on personal devices. The move is part of the U.S.'s effort to stay competitive in AI development against China. These models are available on platforms like Hugging Face and through cloud providers including Amazon and Microsoft.
Key Facts
OpenAI released two new open-source AI models, gpt-oss-120b and gpt-oss-20b.
The new models allow users to run AI processes on personal devices instead of using cloud services.
OpenAI aims to give countries more control over data storage and independence from cloud providers like Google and Microsoft.
Industry leaders emphasize the importance of the U.S. maintaining a lead in open-source AI against Chinese competition.
The models are text-only and available on platforms like Hugging Face, with Amazon and Microsoft offering access.
The larger model, gpt-oss-120b, runs on a single GPU with 80GB of RAM, while the smaller, gpt-oss-20b, requires 16GB of RAM.
These models are similar to other AI models but focus on easy access for users to download and fine-tune them.
OpenAI has not released details about what data the new models were trained on.
Read the Original
Want the full story? Tap a source to open the original
article.
In 2025, the word "clanker" emerged as a derogatory term for robots, gaining popularity on social media platforms like TikTok and Instagram. The origin of "clanker" traces back to the Star Wars universe, and it has become more commonly used as robots become more present in everyday life. The term reflects an "us versus them" mindset, ironically assigning human-like traits to non-human entities.
Key Facts
"Clanker" is a term used to insult robots.
The word comes from the Star Wars universe, where it describes robots by the sound they make.
"Clanker" became viral on platforms like TikTok and Instagram.
The term fulfills a cultural need as robots are more present in daily life.
Some people are using the word to express frustration with interacting with robots in customer service.
Using "clanker" can create a divisive mindset, akin to other forms of discrimination.
The term's popularity reflects cultural themes, like the fear of robots taking over jobs.
Read the Original
Want the full story? Tap a source to open the original
article.
AI companies are creating new tools aimed at helping students study more effectively. OpenAI introduced a feature in its ChatGPT that acts like a tutor, and many other educational companies are adapting their services to compete with or complement AI. Some companies like Chegg and Macmillan Learning are integrating AI into their platforms to offer guided learning experiences.
Key Facts
OpenAI launched a "study mode" in ChatGPT to help students learn using quiz and study plan features.
Google also announced study-oriented tools on the same day as OpenAI's launch.
Chegg has adapted by incorporating AI into its platform and laying off about 250 employees due to competition from AI tools.
Chegg plans to offer services for $19.99 a month aimed at encouraging long-term learning goals.
Macmillan Learning has an AI tool that guides students through problems using open-ended questions.
Chegg's AI feature shows answers from various platforms for comparison, including ChatGPT.
Some students are mixing AI tools with traditional study resources like Quizlet.
Read the Original
Want the full story? Tap a source to open the original
article.
Shaun Thompson is taking legal action against the Metropolitan Police for wrongly identifying him as a suspect using live facial recognition technology. This is the first legal case in the UK challenging the police's use of this technology. The Met Police plans to increase the use of facial recognition, arguing it helps catch dangerous criminals.
Key Facts
Shaun Thompson was mistakenly identified as a suspect by the police using facial recognition technology.
He was stopped by police near London Bridge in February last year.
A privacy group called Big Brother Watch supports his legal challenge, concerned about the technology's privacy implications.
The Met Police plans to double the use of live facial recognition in London.
As of 2024, the Met Police reported over 1,000 arrests using the technology, with 773 resulting in charges or cautions.
There were 457 arrests and seven false alerts since January 2025.
Facial recognition maps facial features and matches them against a database of suspects.
The legal challenge aims to review the use of facial recognition's rules and its societal impact.
Read the Original
Want the full story? Tap a source to open the original
article.