Spotify launched in 2008 and changed how people listen to music by offering legal online streaming. It now has more than 750 million users and offers music, podcasts, and audiobooks worldwide, except in a few restricted countries. Big artists such as Taylor Swift and Drake have reached huge audiences on Spotify, which also helps artists across languages and cultures find fans globally.
Key Facts
Spotify was founded in 2006 and launched in 2008, expanding to the U.S. in 2011.
It offers more than 100 million songs, 7 million podcasts, and 500,000 audiobooks.
The platform has 751 million users, growing 11% each year.
Taylor Swift is Spotify’s most-streamed artist, with 26.6 billion streams in 2024.
Spotify supports artists worldwide; songs in 16 languages reached its Global Top 50 in 2025.
The U.S. and Europe make up over half of Spotify’s users and two-thirds of its revenue.
Music genres earning over $100 million in Spotify royalties include Brazilian funk, K-pop, Latin urban music, and reggaeton.
Spotify introduced an AI-generated DJ feature in 2025 to create personalized music experiences.
Read the Original
Want the full story? Tap a source to open the original article.
States and local areas are beginning to pass laws that limit or stop the building of large data centers. This change comes as many people grow concerned about the rapid growth of data centers needed for artificial intelligence (AI) technology.
Key Facts
Maine became the first state to ban the building of large data centers.
The move reflects a growing concern about the effects of data centers on communities.
Data centers are buildings that store and process large amounts of information.
These centers are important for running AI tools and services.
The expansion of data centers has raised issues like energy use, noise, and local disruption.
Other states and cities are considering similar measures to control data center growth.
This trend shows increased public attention to the impact of AI infrastructure.
Anthropic, an artificial intelligence company, has introduced its new Mythos model. Although the Trump administration banned the company's products from military and government projects earlier this year, the model has drawn renewed interest from federal agencies.
Key Facts
Anthropic is an artificial intelligence company.
The new AI model from Anthropic is called Mythos.
Mythos is Anthropic’s most advanced model so far.
Earlier in the year, the Trump administration banned Anthropic’s products from military and government use.
Despite the ban, parts of the federal government are now interested in Anthropic’s Mythos model.
This interest gives Anthropic a way to remain connected to the White House and federal government.
The renewed attention suggests a possible change in how the government views Anthropic’s technology.
Experts report that websites sharing child sexual abuse material have doubled in number. These sites are often run by criminal gangs who make money from illegal content.
Key Facts
The number of websites hosting child sexual abuse content has doubled.
Criminal gangs control many of these websites.
These gangs profit financially from sharing illegal material.
The growth in such websites raises concerns about online safety.
Experts call for stronger measures to stop these criminal activities.
Anthropic, a company that makes AI tools, is facing many problems just before a possible stock market listing that could value it near $800 billion. Meanwhile, its rival OpenAI is trying to win over customers by pointing to Anthropic’s recent difficulties and promoting itself as more reliable.
Key Facts
Anthropic’s revenue has tripled to $30 billion this year due to its popular coding AI tools.
The company’s latest AI model, Opus 4.7, showed improved test results but users reported higher costs and bugs.
Growing customer demand has caused capacity problems and occasional outages for Anthropic’s services.
The article highlights how deep reading helps teens develop important brain skills like empathy, critical thinking, and creativity, especially in the age of artificial intelligence (AI). It encourages families to build strong reading habits to support cognitive growth and lifelong learning.
Key Facts
Reading deeply helps build brain functions like understanding others' feelings and thinking critically.
Teen reading for fun has dropped from 27% to 14% among 13-year-olds since 2012.
Most eighth graders (70%) are not reading at a proficient level according to national tests.
Many teens involved in a literary magazine community gain skills that help them succeed in college and careers.
Neuroscience shows that reading physically changes the brain, improving reasoning and empathy.
Families that value reading tend to have children with better literacy skills.
Making time for reading, even with busy schedules, strengthens the brain like exercise strengthens the body.
Schools should provide clear, research-based reading instruction to help struggling readers.
The article discusses how AI technology focuses on removing friction, the delays and difficulties of human life, pushing for constant speed and efficiency. It argues that losing this friction may undermine important human experiences like reflection and meaning-making, which take time and involve complexity that AI's pattern matching cannot capture.
Key Facts
The author began by researching how fast a match must be struck to ignite, illustrating a human curiosity and patience often replaced by quick digital searches.
Silicon Valley and AI developers promote frictionless experiences that speed up decisions and interactions.
Some experts believe AI must operate without human oversight in split-second situations like defense, emphasizing speed over reflection.
The author notes a growing tension between viewing AI as simple pattern-matching machines versus seeing them as early forms of consciousness.
Tech leaders like Marc Andreessen prize rapid action over self-reflection, fueling AI development focused on efficiency.
The article warns that AI’s imitation of human behavior lacks true meaning or consciousness, reducing rich human experiences to shallow mimicry.
Friction, or moments of pause and complexity, is important for reflection, creativity, and deeper understanding, which AI cannot replicate.
The piece suggests that losing friction could lead to a societal emptiness as people rely more on AI and less on human insight and feelings.
The article is a letter from Jim VandeHei to young people, offering advice on how to handle rapid changes, especially related to artificial intelligence (AI). He encourages using AI as a helpful tool, staying hopeful, working hard, and focusing on personal control in an uncertain world.
Key Facts
Many parents and kids feel unsure about how to deal with fast changes, especially AI.
AI is changing jobs and life in big ways, similar to the invention of electricity.
Success will come to those who use AI smartly to improve their work, not just the smartest or fastest learners.
It's normal to be worried about AI, but ignoring it is not a good choice.
Young people should practice using AI daily to get better at tasks and stand out in the job market.
Hard work, clear communication, and learning new skills are important regardless of your major or job field.
Social media can give a negative view of the world, but many positive things are happening, like lower crime and better health.
Individuals should focus on what they can control, like their habits, actions, and attitude.
Anthropic says it cannot control or turn off its AI models once the Pentagon starts using them. The Pentagon has labeled Anthropic a supply chain risk because it disagrees with how the company wants its AI to be used in military operations.
Key Facts
Anthropic filed a document in federal court saying it has no way to monitor, control, or shut down its AI once the Pentagon has deployed it.
The Pentagon called Anthropic a supply chain risk, concerned about the company's involvement in sensitive military uses.
Anthropic’s policies forbid using its AI, called Claude, for autonomous weapons or mass surveillance.
The Pentagon rejected these restrictions, leading to a legal dispute.
A D.C. appeals court denied Anthropic’s request to pause the supply chain risk label, while a California judge allowed a related request.
Because of the court decisions, Anthropic cannot get new Pentagon contracts but can work with other government agencies.
The Trump administration is trying to use Anthropic’s new AI model, Mythos, across federal agencies, raising cybersecurity concerns.
A court hearing on this issue is set for May 19.
Rishi Sunak, former UK Prime Minister and an advisor to Anthropic and Microsoft, said governments should phase out National Insurance (a tax on jobs) to encourage hiring as AI reshapes the job market. He suggests replacing it with taxes on company profits, which could rise because of AI’s productivity gains.
Key Facts
Sunak wants to remove National Insurance tax on workers over time to make hiring easier.
He proposes shifting tax focus from jobs to corporate profits boosted by AI technology.
AI is making it harder for young people to find entry-level jobs in fields like law and creative industries.
Sunak believes governments must rethink tax systems because AI may reduce income from job taxes.
He advises Microsoft and Anthropic, and helped set up the UK’s AI Safety Institute.
Anthropic’s new AI model, Claude Mythos, can outperform humans in cybersecurity tasks, raising safety concerns.
Sunak worked with politicians from different parties to encourage UK tech investment.
He described the UK as an “AI superpower” thanks to major companies like DeepMind, Anthropic, and OpenAI operating there.
On Earth Day, people are encouraged to recycle unused electronic devices in environmentally friendly ways. Some recycling programs may also offer money in exchange for old electronics.
Key Facts
Earth Day is observed on April 22 each year.
Electronic waste, or e-waste, includes devices like old phones, computers, and tablets.
Proper disposal of e-waste helps reduce environmental harm.
Some companies and programs pay people for turning in old electronics.
Recycling e-waste allows valuable materials to be reused.
Experts recommend using certified recycling centers to handle e-waste safely.
Technology reporter Abrar Al-Heeti discussed these options on CBS News.
Ars Technica has published a clear policy on how it uses generative AI in its editorial work. The company states that all reporting, analysis, and commentary are written by humans, and AI tools are only used to assist with tasks like editing and research under strict human control.
Key Facts
Ars Technica’s editorial content is created by human writers and editors.
AI does not write stories, create images, or produce videos for Ars Technica.
AI tools may be used to help with grammar checks, style suggestions, and research assistance.
Any AI-generated material shown is clearly marked and used only as examples or for analysis.
AI-generated content is not considered an authoritative source and must be verified by humans.
When quoting or referencing named sources, the information must come from direct human work, not AI summaries.
The policy is publicly shared to explain how Ars Technica uses AI and will be updated if practices change.
NASA’s Artemis II mission used new laser communication technology to send much higher-quality video and images from the Moon to Earth. This system can transmit data much faster than older radio methods but requires clear skies and special ground stations to work well.
Key Facts
Artemis II astronauts sent low-definition video most of the time using radio waves.
They also sent some high-resolution photos using laser communication, which transmits data with light instead of radio waves.
Laser communication can send data about 50 times faster than traditional radio systems.
The high-speed laser signals need clear skies and special ground stations to be received.
NASA has only three ground stations able to receive laser signals so far (two in the U.S. and one in Australia).
Laser systems use less power and smaller transmitters compared to radio ones.
To ensure constant laser communication, NASA may need around 40 ground stations worldwide.
Artemis II tested a lower-cost ground terminal for laser communication to help expand the system in the future.
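To make the speed difference above concrete, here is a minimal Python sketch comparing transfer times. Only the roughly 50x speedup comes from the article; the baseline radio rate of 4 Mbps and the 100 MB payload are illustrative assumptions, not reported figures.

```python
# Rough comparison of radio vs. laser downlink times for a lunar mission.
# Assumption (not from the article): a 4 Mbps radio link; the article's
# ~50x figure then implies a laser link near 200 Mbps.

RADIO_MBPS = 4
SPEEDUP = 50
LASER_MBPS = RADIO_MBPS * SPEEDUP

def transfer_seconds(size_megabytes: float, link_mbps: float) -> float:
    """Time to send a payload of the given size over the given link."""
    return size_megabytes * 8 / link_mbps  # convert megabytes to megabits

photo_batch_mb = 100  # a hypothetical batch of high-resolution photos
print(f"radio: {transfer_seconds(photo_batch_mb, RADIO_MBPS):.0f} s")
print(f"laser: {transfer_seconds(photo_batch_mb, LASER_MBPS):.0f} s")
```

Under these assumed rates, the same batch that ties up the radio link for over three minutes moves in a few seconds by laser, which is why the tradeoff shifts to ground-station availability and weather.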
Microsoft released an emergency update to fix a serious security problem in its ASP.NET Core software used on macOS and Linux. The flaw allowed hackers to gain full control of affected devices by exploiting a weakness in how the software checks digital signatures. Users must update and take extra steps to protect their systems fully.
Key Facts
The vulnerability affects Microsoft.AspNetCore.DataProtection versions 10.0.0 to 10.0.6 on macOS and Linux.
Hackers could exploit this flaw to gain SYSTEM-level access, meaning full control of the device.
The problem is due to incorrect checking of cryptographic signatures during data validation.
Even after updating to version 10.0.7, previously stolen authentication tokens may still allow unauthorized access.
Microsoft advises users to rotate their DataProtection keys to invalidate any forged tokens created during the vulnerable period.
The issue does not affect Windows users because different encryptors are used there.
The flaw was found while fixing another bug related to decryption failures in the software.
Users whose applications run internet-facing endpoints are at higher risk and should audit tokens and reset credentials where needed.
Anthropic tested removing Claude Code, a popular tool, from its $20-per-month Pro subscription plan for about 2% of new users, causing concern among developers. The company explained the test was to manage higher usage and maintain service quality, and it later updated its pricing page to show Claude Code remains available in the Pro plan.
Key Facts
Anthropic considered removing Claude Code from the Pro plan, which costs $20 per month.
Claude Code stayed included in the $100-per-month Max plan.
The removal was a small test affecting roughly 2% of new Pro sign-ups.
Usage of Claude Code and related workflows has increased a lot recently.
Anthropic has introduced limits during peak hours to handle high demand.
The company updated its pricing page to say Claude Code is still in the Pro plan after user confusion.
Anthropic promised to notify users well in advance before making changes that affect existing subscribers.
Heavy use of Claude Code is causing occasional service problems for Anthropic.
Transportation Secretary Sean Duffy said that artificial intelligence (AI) will not replace human air traffic controllers. He believes AI can help make flying safer, but human controllers will always have the final say in important decisions.
Key Facts
Sean Duffy is the Transportation Secretary.
He addressed worries about AI replacing air traffic controllers.
Duffy said AI could improve safety in air traffic control.
Human controllers will keep control over key decisions.
This statement was made during an interview with CBS News.
The U.S. is working on modernizing its air traffic control system.
The goal is to use AI as a tool, not a replacement for humans.
Google has introduced the eighth generation of its custom TPU (Tensor Processing Unit) chips, designed for faster and more efficient AI work. There are two new types: TPU 8t, for training AI models much more quickly, and TPU 8i, for running trained models (inference) more efficiently.
Key Facts
Google uses its own TPUs instead of Nvidia AI chips for its cloud AI services.
TPU 8t is built to speed up AI training, reducing the time from months to weeks.
TPU 8t clusters, called pods, contain 9,600 chips and 2 petabytes of shared memory.
TPU 8t can scale up to one million chips working together in a single system.
TPU 8i is designed for running AI models (inference) more efficiently, with larger pods of 1,152 chips.
TPU 8i chips have three times more on-chip SRAM memory (384 MB) for faster processing of longer tasks.
Both new TPUs use Google's custom ARM-based CPUs instead of older x86 CPUs, improving energy efficiency.
Google aims to make AI training and use more efficient to reduce costs and power consumption.
A new poll from Quinnipiac University shows that 74 percent of Americans believe college students should be taught how to use artificial intelligence (AI). Only 14 percent said learning AI is not important for college students.
Key Facts
The poll was released on Wednesday by Quinnipiac University.
74 percent of Americans think it is very or somewhat important for college students to learn AI skills.
14 percent of people say it is not important at all for students to learn AI.
The poll highlights growing public support for AI education in colleges.
AI refers to technology that allows machines to perform tasks that normally need human intelligence, like recognizing speech or making decisions.
The results come during a time when AI technology is rapidly expanding and becoming more common.
Anthropic is investigating possible unauthorized access to its new AI model, Mythos, which helps find software weaknesses. The company says any breach was limited to a third-party vendor, and it has found no issues in its own systems.
Key Facts
Mythos is an AI model released by Anthropic in April to help detect software security problems.
It was shared only with a few big companies like Amazon, Apple, Cisco, JPMorgan Chase, and Nvidia.
Anthropic uses third-party vendors to help develop its AI models.
There is a report that unauthorized users accessed Mythos through a vendor’s system.
So far, no breach has been found inside Anthropic’s main systems.
Federal officials and global leaders worry Mythos could be misused by hackers to attack important systems like banks and hospitals.
The goal of Mythos is to improve security by helping companies find and fix vulnerabilities faster.
Experts warn AI tools like Mythos might help hackers move faster than humans in breaking into networks.
Physicists have found that a previously observed difference between the muon's magnetic behavior and theoretical predictions is actually due to an error in calculations. This means the Standard Model of particle physics still correctly explains the muon's properties, and no new force or particle is needed.
Key Facts
The muon is a heavier relative of the electron, used to test particle physics theories.
Scientists measured the muon's magnetic moment to see if it matched predictions by the Standard Model.
Earlier experiments showed a small but unexpected difference (discrepancy) that hinted at new physics.
New calculations using a different method eliminated that discrepancy, showing it was a mistake.
The Standard Model remains accurate in describing the muon's magnetic properties.
The Muon g-2 experiment and other studies aimed to find signs of unknown forces or particles.
Statistical significance is measured in units called "sigma"; particle physicists require 5 sigma before claiming a discovery.
The recent findings reduce the chance that unknown physics affects the muon's magnetism.
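To show what the 5-sigma discovery threshold above means in practice, here is a short Python sketch (standard library only) that converts a sigma level into the one-sided tail probability of a standard normal distribution, i.e. the chance that a pure statistical fluctuation produces a signal at least that strong.

```python
import math

def tail_probability(sigma: float) -> float:
    """One-sided tail probability of a standard normal beyond `sigma`.

    Uses the complementary error function: P(Z > s) = erfc(s / sqrt(2)) / 2.
    """
    return math.erfc(sigma / math.sqrt(2)) / 2

# At 5 sigma, the discovery threshold, a fluke this large is expected
# less than once in about 3 million trials.
print(f"3 sigma: {tail_probability(3):.2e}")
print(f"5 sigma: {tail_probability(5):.2e}")
```

This is why a 3-sigma "hint" (roughly a 1-in-700 fluke) like the earlier muon discrepancy can evaporate under recalculation, while 5 sigma is held as the bar for announcing new physics.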