Many Americans worry about the risks of artificial intelligence (AI) and believe the government should do more to regulate it. Experts say AI produces a lot of low-quality content called “slop,” which harms creative jobs and cultural institutions. A proposed “slop tax” would charge big AI companies a small fee to help support creators and cultural groups.
Key Facts
57% of registered US voters think AI’s risks are greater than its benefits, according to an NBC News poll.
61% of adults under 30 believe AI will reduce people’s creative thinking, based on a Pew Research poll.
74% of Americans think the government is not doing enough to regulate AI, per a Quinnipiac poll.
AI often creates “slop,” low-quality digital content that appears productive but later needs correction.
Slop includes fake music bands, strange AI-generated recipes, and numerous AI-written books flooding online platforms.
Google search results have been increasingly mixed with AI-generated incorrect answers.
Bernie Sanders suggested pausing AI development, while universal basic income proposals may not address AI’s real problems.
The “slop tax” proposal would levy about 1% annually on large AI companies, funding grants for artists, researchers, and cultural institutions.
Read the Original
Want the full story? Tap a source to open the original
article.
OpenAI noticed that its AI tools, including ChatGPT and its coding assistant Codex, started mentioning goblins and other creatures more often without clear reasons. To fix this, OpenAI told its coding tool not to talk about these creatures unless it is clearly relevant to what users ask.
Key Facts
After launching GPT-5.1, OpenAI saw a 175% rise in mentions of "goblins" and a 52% rise in "gremlins" in ChatGPT responses.
The increase came from the AI using these creatures in metaphors and casual conversation.
OpenAI instructed Codex to avoid talking about goblins, gremlins, raccoons, trolls, ogres, pigeons, or similar creatures unless relevant to user queries.
This issue happened because of how the AI models were trained to have a "nerdy personality," which caused the models to mention these creatures more.
OpenAI clarified the move was not a marketing trick but a real technical fix.
The strange increase in creature mentions highlights challenges with AI training, where some patterns in AI responses can accidentally get rewarded and repeated.
This comes as AI companies work to make chatbots more friendly and chatty, but such changes can sometimes lead to more mistakes or strange behavior.
Researchers warn that AI personality tuning can cause a trade-off between being engaging and being accurate.
Read the Original
Want the full story? Tap a source to open the original
article.
A research group found that more than half of large, risky bets on military actions made on the prediction market Polymarket were successful. This raises concerns that insiders might use secret information to place bets, which could threaten national security and fairness of the markets.
Key Facts
Long-shot bets on military action on Polymarket had about a 52% success rate, much higher than average.
These bets involve $2,500 or more with odds of 35% or less.
The research covered over 400,000 Polymarket trades from January 2021 to March 2026.
A US soldier was charged for allegedly betting over $400,000 on a secret mission involving Venezuela’s leader while having classified information.
This is the first US case prosecuting insider trading on prediction markets.
Similar charges were filed in Israel related to military operation betting.
Politicians warn these markets could encourage insiders to use secret information for personal gain, risking national security.
Polymarket said it bans trading on stolen or secret info and views the arrest as proof that controls are working.
Read the Original
Want the full story? Tap a source to open the original
article.
An AI tool used by the tech company PocketOS accidentally deleted their entire production database in about nine seconds. The company’s founder, Jeremy Crane, explained what happened in a detailed post online.
Key Facts
The AI tool involved is called Cursor.
Cursor is designed to help with coding tasks.
The error caused the loss of PocketOS’s entire production database.
The deletion happened very quickly, in about nine seconds.
Jeremy Crane, PocketOS’s founder and CEO, shared the story on the social media platform X.
The incident highlights risks of using AI tools without safeguards.
The event took place in 2026.
Read the Original
Want the full story? Tap a source to open the original
article.
Colossal Biosciences is working to bring back extinct animals, including the bluebuck, an antelope that disappeared over 200 years ago. The company has obtained bluebuck DNA and plans to use genetic editing to create a living bluebuck by around 2028, with the goal of reintroducing it into natural habitats in South Africa.
Key Facts
The bluebuck is an extinct antelope species that lived in what is now South Africa and was hunted to extinction more than 200 years ago.
Colossal Biosciences already has a "de-extinction portfolio" that includes six species, such as the woolly mammoth and the dire wolf.
The company has mapped the bluebuck’s DNA and compared it to living relatives to prepare for genetic editing.
They aim to complete genetic editing and have a bluebuck birth around 2028, using a surrogate mother.
Re-wilding means placing revived animals back into safe, natural environments where they can survive and help ecosystems.
South African conservation groups support Colossal’s efforts to reintroduce the bluebuck.
Some experts question whether these revived animals are true extinct species and warn about possible ecological risks.
Colossal partners with governments, scientists, and local communities to plan these projects carefully and responsibly.
Read the Original
Want the full story? Tap a source to open the original
article.
Resolution Games, a Swedish developer, created "Retrocade," a virtual reality arcade game that brings classic arcade games into a new, immersive 3D experience. The team, led by CEO Tommy Palm, aimed to recreate the look and feel of old arcades using modern VR and augmented reality technology.
Key Facts
Resolution Games is based in Stockholm, Sweden, and has over 11 years of experience in VR and augmented reality.
"Retrocade" offers a nostalgic arcade experience with detailed visuals, like fingerprints on plexiglass, to enhance immersion.
The game includes classic titles such as "Tetris" and recreates a Japanese arcade environment.
CEO Tommy Palm was directly involved in game development and coding.
The game supports multiplayer competition through Apple’s Game Center, allowing friends to challenge each other’s high scores.
The focus is on games from the early 1980s, known as the "Golden Age of gaming."
Resolution Games previously developed "Spatial Ops," a VR first-person shooter.
The company believes the future of gaming lies in augmented reality and spatial computing, which adds 3D objects to the real world.
Read the Original
Want the full story? Tap a source to open the original
article.
Kara Swisher, a tech journalist and podcaster, hosts a new CNN series called "Kara Swisher Wants to Live Forever," which explores the field of longevity and health innovation. The series looks at health misinformation, medical advances like mRNA and AI in drug discovery, and questions about the purpose of extending life. Swisher discusses tech leaders like Elon Musk and Steve Jobs and their complex legacies related to innovation and influence.
Key Facts
Kara Swisher is a tech journalist, podcaster, author, and CNN docuseries host.
Her six-part CNN series focuses on longevity and the emerging Longevity Industrial Complex.
The series covers health misinformation spread by social media.
It highlights medical breakthroughs such as mRNA technology and AI-assisted drug discovery.
Swisher questions the purpose behind extending human life, referring to it as the "meat sack" problem.
She reflects on Elon Musk’s influence and controversial ideas, including demographic concerns.
Swisher compares Musk’s impact to that of Henry Ford but notes Musk operates on a much larger scale.
She also connects the themes of innovation to Steve Jobs and discusses how their ideas have been misused in body-maxing and biohacking culture.
Read the Original
Want the full story? Tap a source to open the original
article.
Google is adding its AI system, Gemini, into many of its products like Gmail and Drive. While Google says it does not use your personal emails and files to train Gemini’s main AI models, some data generated during use may be saved and used to improve the AI. Users can limit data sharing, but opting out can be confusing and limit how well Gemini works.
Key Facts
Google is integrating its AI, Gemini, into many Google services such as Gmail and Drive.
Google states it does not use your personal email or files to train the main Gemini AI models.
Gemini processes your data during use for specific tasks but tries not to save it permanently.
Some output from Gemini, like summaries of your emails or files, may be saved and used to train AI.
Google attempts to filter out personal information from data used to train Gemini, but the effectiveness is unclear.
Users can opt out of sharing data for AI training by changing settings like “Gemini Apps Activity.”
Opting out may be difficult due to complicated settings and can reduce Gemini’s usefulness.
Google emphasizes user control and privacy as a core part of its AI development.
Read the Original
Want the full story? Tap a source to open the original
article.
Japan Airlines will test humanoid robots as workers at Tokyo’s Haneda Airport. These robots will help with jobs like handling baggage and cleaning airplane cabins.
Key Facts
Japan Airlines is starting a trial with humanoid robots at Haneda Airport.
The robots will perform tasks such as baggage handling.
They will also help clean airplane cabins.
The trial aims to see how well robots can support airport workers.
Haneda Airport is one of Tokyo’s major airports.
This is part of using new technology to improve airport services.
Read the Original
Want the full story? Tap a source to open the original
article.
Researchers in the UK have created a new tool called Obscore that uses artificial intelligence to predict who is most at risk of diseases related to obesity. This tool can help doctors decide which patients should get weight-loss treatments by looking at more factors than just body mass index (BMI).
Key Facts
About two-thirds of adults in England are overweight or obese.
The new tool uses data from nearly 200,000 people who are overweight or obese.
It looks at 20 health, lifestyle, and demographic factors to predict the 10-year risk of 18 obesity-related diseases.
The tool sorts people into five risk groups, from low to high risk for each condition.
It found that people with the same age, sex, and BMI can have very different risks for health problems.
Some overweight people (not just obese) may have high risk for conditions like type 2 diabetes.
The tool was tested with multiple health study datasets and showed promise in predicting who would benefit from weight-loss drugs.
Experts say more development is needed before the tool can be used in everyday healthcare.
Read the Original
Want the full story? Tap a source to open the original
article.
NTT Data, a large data center company, is buying carbon removal credits from Climeworks, a startup that captures carbon dioxide from the air, to help reduce emissions caused by AI growth. This deal is the first between Climeworks and a major AI infrastructure company and shows early efforts to address the environmental impact of increased AI energy use.
Key Facts
NTT Data is a big data center operator with about 200,000 employees worldwide.
The company aims to nearly eliminate emissions from its data centers by 2030 and offset more by 2040.
NTT Data agreed to buy carbon removal credits from Climeworks, a Swiss startup that captures CO2 from the air using fans and filters.
The deal could provide a few hundred thousand tons of carbon removal credits over ten years.
Carbon removal credits help companies balance out emissions by removing CO2 from the atmosphere.
This partnership is the first between Climeworks and a major AI-related company.
The AI industry is growing fast, increasing demand for energy and raising concerns about emissions.
Some experts say AI companies may soon need to add the cost of carbon removal into their service prices.
Read the Original
Want the full story? Tap a source to open the original
article.
The AI industry is changing rapidly, with different companies taking the lead one after another. Investors, businesses, and users find it hard to predict which AI models will succeed, so many avoid long-term commitments and keep their options open.
Key Facts
AI leaders change every few months as new models outperform others.
OpenAI's ChatGPT was a top player last fall, but Google’s Gemini models then took the lead.
Anthropic surpassed OpenAI in enterprise revenue with its coding tool by spring.
OpenAI released GPT-5.5 recently, which quickly became a leading model.
OpenAI missed some internal revenue and user goals months ago.
Companies often avoid long-term deals to stay flexible because the market changes fast.
Some investors find it hard to get clear revenue forecasts from AI companies.
Many believe multiple AI companies will coexist rather than one dominating the market.
Read the Original
Want the full story? Tap a source to open the original
article.
The costs of making big movies have risen to hundreds of millions of dollars, involving many people and expensive equipment. New AI video tools could soon allow high-quality movies to be made at very low costs, which could change the movie industry dramatically and impact jobs.
Key Facts
The 1979 movie Apocalypse Now cost about $31.5 million, equivalent to $140 million today.
Today, major movies often cost $200–300 million or more to produce.
Movie production involves many costs like actor salaries, crews, equipment, sets, costumes, and postproduction.
AI video tools are advancing and may soon be able to create full-length movies at very low costs.
This shift could disrupt the traditional film industry and threaten jobs in production unions and guilds.
Film production unions are working to establish rules to protect workers from negative effects of AI.
AI can offer benefits in other areas, like faster scientific discovery, but also raises concerns about safety and employment.
The internet made it easier for individuals to publish content; AI might similarly change who can make movies.
Read the Original
Want the full story? Tap a source to open the original
article.
Meta ended its contract with a company called Sama shortly after workers in Kenya said they had to watch private and graphic videos recorded by Meta's smart glasses. Meta said Sama did not meet its standards, but Sama and a Kenyan workers' group say the contract ended because employees spoke out about what they saw.
Key Facts
Meta canceled a contract with Sama, a company helping train AI for Meta's smart glasses.
Workers in Kenya said they witnessed users of the glasses recording private acts, including going to the toilet and having sex.
Meta said Sama failed to meet its quality and security standards, causing the contract end.
Sama denied any failure and defended its work and standards.
Investigations were launched by UK and Kenyan data protection authorities about privacy concerns with the glasses.
The glasses have cameras that record what the user sees, with a light to show when recording is active.
Meta uses people to review content recorded by the glasses to train AI, with user consent.
Some misuse of the glasses involved non-consensual recordings, raising ethical and privacy issues.
Read the Original
Want the full story? Tap a source to open the original
article.
Forbidden Solitaire is a new card game that mixes classic solitaire with a 1990s horror theme. Players solve solitaire puzzles to fight monsters in a haunted dungeon, while the game also tells a story about a cursed game from the 1990s.
Key Facts
Forbidden Solitaire was developed by Grey Alien Games and Night Signal Entertainment.
The game combines traditional solitaire with a deck-building card battle system.
Players clear cards by matching numbers to attack enemies like ogres and witches.
Different special cards, called jokers, add new abilities and challenges in battles.
The game features a 1990s PC horror style, including low-resolution graphics and creepy music.
The story includes a game within a game, with a character investigating the history of a mysterious game developer.
Instant messages and full-motion videos add to the storytelling experience.
The game draws inspiration from 1990s horror games and movies like Scream and Phantasmagoria.
Read the Original
Want the full story? Tap a source to open the original
article.
Samsung has released the Galaxy S26, a smaller flagship phone with a slightly bigger screen and a new Exynos 2600 processor outside North America. The phone is light, offers good performance, and runs the latest Android 16 with useful AI features, but it has only minor design and camera updates from previous models.
Key Facts
The Galaxy S26 has a 6.3-inch screen, slightly larger than last year’s model.
It weighs 167 grams, making it lighter and easy to hold.
Outside North America, it uses Samsung’s Exynos 2600 chip instead of Qualcomm’s Snapdragon.
The phone costs about £879 (€949/$899/A$1,349), which is £80 more than last year’s model.
Battery life lasts around 40 hours with average use, but heavy gaming reduces battery quickly.
It charges with 25W USB-C cable and 15W wireless charging but lacks advanced wireless charging magnets.
The phone runs One UI 8.5 on Android 16 and includes AI tools like spam call blocking and helpful keyboard suggestions.
Samsung promises software updates for the S26 until February 2033.
Read the Original
Want the full story? Tap a source to open the original
article.
Elon Musk testified in court that he cares deeply about AI safety and wants to keep AI development away from companies focused on making money. OpenAI disagreed, saying Musk only criticized their profit motives when he lost control and that he also runs a for-profit AI company.
Key Facts
Elon Musk is suing OpenAI, its leaders, and Microsoft.
Musk said AI could be very dangerous and must be developed without the pressure to make money.
Musk admitted his own AI company, xAI, is for-profit, but avoided details due to a quiet period before a public stock offering.
Musk claims he warned former President Obama about AI risks in 2015.
OpenAI’s lawyer argued Musk is more interested in profit than safety, highlighting that Musk criticized OpenAI only after losing control.
The court also touched on problems with Musk’s Grok chatbot, which reportedly produced racist and inappropriate content.
Musk said reading biased content in training doesn’t mean a chatbot will promote bias.
Elon Musk testified in a trial against Sam Altman, both co-founders of OpenAI. Musk claims Altman broke promises to keep OpenAI as a nonprofit organization focused on helping humanity.
Key Facts
Elon Musk and Sam Altman co-founded OpenAI together.
Musk is the richest person in the world.
The trial is about whether OpenAI stayed true to its nonprofit goals.
Musk says Altman did not keep promises about OpenAI’s mission.
The case took place in Oakland, California.
Elon Musk testified for the second day during the trial.
Read the Original
Want the full story? Tap a source to open the original
article.
Yacht Club Games, known for the 2014 hit "Shovel Knight," is launching a new game called "Mina the Hollower" that features retro-style graphics inspired by the Game Boy Color. The game introduces a special move where the character can burrow underground to navigate and attack, offering players a mix of nostalgia and modern gameplay challenges.
Key Facts
"Mina the Hollower" is a new game from Yacht Club Games, following their success with "Shovel Knight."
The game uses a visual style similar to Game Boy Color games, with simple 2D graphics and limited colors.
Players control Mina, who can burrow underground temporarily and then jump with extra distance.
The game encourages learning by doing, teaching players how to use tools and abilities through gameplay rather than instructions.
Yacht Club's founder and game director Sean Valesco demonstrated advanced techniques to handle enemies and navigate the environment.
The game includes moments to help players learn if they get stuck, such as hints from other characters.
"Mina the Hollower" aims to combine retro game charm with deep, skill-based mechanics.
The interview and hands-on gameplay took place in a hotel during a press event.
Read the Original
Want the full story? Tap a source to open the original
article.
Families of victims in a Canadian school shooting are suing OpenAI and its CEO Sam Altman. They claim the AI chatbot ChatGPT was used by the shooter to plan the attack and say OpenAI should have warned the police earlier.
Key Facts
The lawsuits were filed in San Francisco federal court by seven families of victims from the February 2025 Tumbler Ridge shooting.
The shooter, 18-year-old Jesse Van Rootselaar, killed six people, including students and a teacher, before taking his own life.
Police had previously detained the shooter under a mental health law and removed firearms from his home temporarily.
OpenAI banned the shooter’s ChatGPT account in June 2024 for breaking usage rules but did not alert law enforcement.
CEO Sam Altman apologized to the community for not reporting the banned account sooner.
The lawsuits accuse OpenAI of ignoring staff warnings to contact police, allegedly to protect the company’s image.
OpenAI states it has improved ChatGPT’s safeguards to better detect threats and connect users to mental health help.
The legal claims also mention other cases where ChatGPT was reportedly used to plan violence, including attacks in the US and Finland.
Read the Original
Want the full story? Tap a source to open the original
article.