Elon Musk is involved in a legal case against Sam Altman, co-founder of OpenAI. Musk claims that OpenAI has moved away from its original goal as a nonprofit organization.
Key Facts
Elon Musk, CEO of Tesla, is suing Sam Altman, a co-founder of OpenAI.
Opening statements in the lawsuit began on Tuesday.
Musk alleges OpenAI abandoned its nonprofit mission.
OpenAI was initially established as a nonprofit organization.
The case is being covered by major news outlets such as CBS News.
The trial involves testimony from Elon Musk.
Maxwell Zeff, a writer for Wired, is providing analysis on the case.
The dispute highlights differences in visions for the future direction of OpenAI.
Read the Original
Want the full story? Tap a source to open the original article.
“Margo’s Got Money Troubles” is a TV show based on a 2024 novel by Rufi Thorpe. The series started on Apple TV in April 2026 and releases new episodes every Wednesday until May 20, 2026.
Key Facts
The show is about Margo Millet, a 20-year-old who dropped out of college and struggles with money.
Margo turns to the adult website OnlyFans to support herself and her baby.
The story follows her family, including her mother, an ex-waitress, and her father, an ex-pro wrestler.
Elle Fanning stars as Margo; other stars include Michelle Pfeiffer, Nick Offerman, Nicole Kidman, and Greg Kinnear.
Episode 5, titled “Flamingoes,” will air on April 29, 2026.
The show has eight episodes total, all available exclusively on Apple TV.
Episodes air weekly, with the first three episodes released on April 15, 2026.
The show mixes drama and comedy as it covers Margo’s financial and personal challenges.
The Federal Communications Commission (FCC) wants to review Disney's broadcast license earlier than planned. The review is tied to an investigation into Disney's diversity and inclusion practices and comes amid an ongoing dispute between the White House and ABC late-night host Jimmy Kimmel.
Key Facts
The FCC has ordered an early review of Disney’s broadcast license.
The review relates to Disney’s diversity, equity, and inclusion (DEI) practices.
The FCC official shared this information with CBS News.
This situation is taking place while the White House and ABC’s Jimmy Kimmel have an ongoing dispute.
Disney owns ABC, the network where Jimmy Kimmel works.
The early review by the FCC is unusual and may affect Disney’s broadcast operations.
The FCC regulates television and radio broadcast licenses in the United States.
The outcome of the review could influence Disney’s future broadcasting permissions.
Elon Musk and Sam Altman, co-founders of OpenAI, are in a court trial in California over disagreements about the company’s direction and finances. Musk claims OpenAI’s commercial activities violate its charitable mission and seeks billions in damages, while OpenAI says Musk left the company and now wants to harm a competitor.
Key Facts
Musk and Altman are involved in a legal dispute over OpenAI's history and purpose.
Musk says OpenAI should remain a charity and that commercial actions have harmed this mission.
Musk donated $38 million to OpenAI when it was a non-profit and claims he helped start it.
OpenAI counters that Musk left the company after failing to gain control.
Musk wants financial compensation and leadership changes at OpenAI.
OpenAI accuses Musk of trying to damage the company out of jealousy.
Musk’s AI company, xAI, launched a chatbot called Grok in 2023, but it still trails OpenAI’s ChatGPT.
The judge warned both sides not to influence the trial through public statements but did not impose a gag order.
In January 2026, U.S. authorities expanded drone no-fly zones to include moving and unmarked vehicles of the Department of Homeland Security (DHS). This policy raised concerns among drone pilots and caused confusion because the zones were large, constantly moving, and difficult to detect, putting many drone operators at risk of unintentionally violating the rules.
Key Facts
The no-fly zones were expanded to cover 3,000 feet horizontally and 1,000 feet vertically around DHS ground vehicles, even when moving and unmarked.
Before this change, no-fly zones mainly applied to certain federal facilities or military vehicles.
Drone pilots risk penalties, including civil and criminal punishment, if their drones are seen as security threats near these vehicles.
Minneapolis drone pilot Rob Levine stopped flying after the expansion due to fear of enforcement actions.
The Federal Aviation Administration (FAA) described the new restrictions as "ambiguous," admitting any flight might accidentally violate the rules.
The policy created large, moving restricted airspaces that drone operators could not easily identify or avoid.
The Drone Service Providers Alliance called the policy an "impossible compliance problem" for drone users.
The changes followed increased federal law enforcement actions and incidents that raised tensions in Minneapolis in early 2026.
This episode explores the challenges faced by outsourced tech workers in Kenya, focusing on why more than a thousand lost their jobs. It also discusses how people can improve their use of artificial intelligence (AI) by changing how they communicate with it, and features a company that turns lamp-posts into small data centers.
Key Facts
Over 1,000 outsourced tech workers in Kenya were made redundant (lost their jobs).
The episode investigates the reasons behind these job losses.
It talks about ways to get better results from artificial intelligence by adjusting communication methods.
An author shares advice on how to interact with AI effectively.
The advice is tested live with an AI system during the show.
A company is converting lamp-posts into mini data centers to support digital services.
The program is presented by Chris Vallance and produced by Tom Quinn.
The episode is part of the BBC World Service and lasts about 26 minutes.
Google has agreed to provide its artificial intelligence (AI) technology to the Pentagon for secret government projects. This deal allows the Pentagon to use Google's AI models for any lawful government work, despite some Google employees expressing concerns about the agreement.
Key Facts
Google signed a contract with the Pentagon to supply AI models.
The AI technology will be used for classified or secret government projects.
The Pentagon can use the AI for any legal government purpose.
Hundreds of Google employees raised concerns about the deal recently.
The agreement shows growing cooperation between tech companies and the U.S. military.
The article does not specify which AI models Google is providing.
The deal is part of ongoing government interest in advanced AI technology.
No details were given about how the AI will be protected or monitored.
Elon Musk has filed a lawsuit against Sam Altman over OpenAI’s change from a nonprofit to a for-profit company. A trial has started in Oakland to decide if OpenAI and Altman did anything wrong, with both sides expected to reveal new information during the proceedings.
Key Facts
OpenAI was originally founded as a nonprofit but later changed to a for-profit model.
Elon Musk sued Sam Altman, focusing on this change.
A nine-person jury was seated in Oakland to hear the case.
The trial will focus first on whether Altman and OpenAI acted improperly and then on possible remedies.
OpenAI recently missed its goals for new users and revenue.
Some OpenAI shareholders have questioned Altman’s leadership amid the company’s challenges as it prepares to go public.
Musk is preparing a major initial public offering (IPO) for SpaceX, which now includes his AI company, xAI.
The trial could reveal new information that may affect the reputations and futures of both men and their companies.
Taylor Swift has filed three new trademark applications to protect her voice and image. Experts believe these filings aim to prevent misuse of her likeness through artificial intelligence (AI).
Key Facts
Swift filed two sound trademarks capturing her saying "Hey, it's Taylor Swift" and "Hey, it's Taylor."
The third trademark is a visual image of Swift holding a pink guitar on stage.
These trademarks were filed with the U.S. Patent & Trademark Office and are awaiting review.
Legal experts say trademarks can help stop unauthorized use of celebrities' voices and images by AI.
Taylor Swift has been the victim of AI misuse, including fake pornographic deepfake images and a false political endorsement.
Actor Matthew McConaughey also filed similar voice and image trademarks to protect against AI misuse.
McConaughey has made a deal with an AI company to allow controlled use of his voice through voice-cloning technology.
Dr. Tom Mihaljevic, CEO of the Cleveland Clinic, says artificial intelligence (AI) can help improve health care but is not a complete solution. The health system must first fix existing problems and prepare well to successfully use AI tools for better and more affordable care.
Key Facts
Dr. Mihaljevic leads the Cleveland Clinic, which works with AI companies like Palantir, IBM, and Oracle.
He believes AI is not a magic fix but can speed up changes needed in health care.
The biggest issue in health care today is providing timely, high-quality care at affordable prices, which the current system fails to do.
Cleveland Clinic spent five years restructuring to unify its services and data across 370 sites worldwide for better AI integration.
The clinic hired AI and data experts from outside health care, especially from Silicon Valley, to bring new knowledge.
Recruiting tech talent is expensive, and many health systems may not afford it, raising concerns about a growing gap between those with and without AI resources.
There is awareness in the health industry about the risk that AI could increase disparities in care quality and access.
The U.S. Department of Health and Human Services is involved in addressing these concerns.
Japan Airlines plans to test humanoid robots at Tokyo’s Haneda Airport starting in May 2026 to help with tasks like sorting luggage and loading cargo. This experiment aims to address the shortage of human workers at airports and will run until 2028, with the possibility of expanding robot use to other airport jobs.
Key Facts
The trial will start in May 2026 and last until 2028 at Haneda Airport in Tokyo.
Japan Airlines and its subsidiary JAL Ground Service are working with GMO AI & Robotics Corporation for the tests.
The robots being tested are the G1 from Unitree Robotics and the Walker E from UBTECH Robotics, both Chinese companies.
Humanoid robots differ from typical factory robots as they must work in more complex, changing environments like airports.
Robots will begin with tasks such as baggage handling and cargo loading, with potential future roles in cleaning and handling equipment.
Haneda Airport is Japan’s second-largest airport, with flights arriving roughly every two minutes.
Japan faces a labor shortage in airport ground staff, with a decline in crew numbers from 26,300 in 2019 to 23,700 in 2023 nationwide.
Previous robot demonstrations show they still need human help to complete tasks like moving cargo containers.
U.S. Cyber Command is building a system to use the most effective artificial intelligence (AI) models for cyber operations, regardless of where the models come from. Their goal is to stay flexible and use the best technology for both defending and attacking in cyberspace, while following existing military rules.
Key Facts
Cyber Command plans to test and adopt powerful AI models from any country or company, setting aside political considerations.
Anthropic has held back some AI models from government use due to concerns about hacking capabilities.
The Pentagon is negotiating access to Anthropic’s Mythos Preview model, but only some agencies currently have it.
OpenAI is actively working with government groups to offer its AI product, GPT-5.4-Cyber.
2026 is the first year Cyber Command has dedicated funding specifically for AI programs.
Cyber Command wants infrastructure that lets operators quickly switch between different AI models as technology changes.
Human control is important; full AI autonomy without human oversight is not allowed.
Military rules guide the use of AI in cyber missions, especially to avoid harming civilian sites like hospitals and schools.
The U.S. Supreme Court is reviewing a lawsuit against Cisco, a large tech company, accused of helping the Chinese government persecute members of the Falun Gong spiritual group using its technology. The court is deciding whether Cisco can be held responsible under U.S. laws for aiding human rights abuses in China.
Key Facts
The lawsuit accuses Cisco of providing technology used to track and persecute Falun Gong members in China.
Cisco argues it should not be held liable under the Alien Tort Statute and the Torture Victim Protection Act.
The Supreme Court is weighing how broadly these laws apply to companies’ overseas activities.
Falun Gong members claim much of Cisco’s work involving China happened in the U.S., justifying the lawsuit in American courts.
Past investigations showed U.S. tech companies helped build China’s surveillance system, despite warnings about misuse.
In 2008, Cisco identified Falun Gong content as a threat and created systems to monitor its members.
Justice Sotomayor expressed concern that Cisco knowingly helped a government that tortures people.
The court’s decision is expected by late June 2024.
Elon Musk and Sam Altman, co-founders of OpenAI, appeared at the start of a federal trial in Oakland, California. The trial involves a dispute between them that may affect how artificial intelligence is developed in the future.
Key Facts
The trial began with opening statements and a jury was chosen on Monday.
The trial is expected to last about three weeks.
The case involves a conflict between Musk and Altman, who were once friends and partners at OpenAI.
OpenAI shifted from a nonprofit startup focused on helping people to a business now valued at $852 billion.
Musk, the world’s richest person with a net worth of about $778 billion, will testify during the trial.
Altman, CEO of OpenAI, and Satya Nadella, CEO of Microsoft, are also expected to testify.
Microsoft helped fund the release of ChatGPT in late 2022, an AI chatbot that boosted interest in AI technology.
Google has signed a deal with the US Pentagon to allow the military to use its artificial intelligence (AI) models for classified work. The agreement lets the Pentagon use Google’s AI for any lawful government purpose, but the company’s employees have raised concerns about the ethical use of AI in military projects.
Key Facts
Google joined other AI companies like OpenAI and xAI in signing deals with the Pentagon for classified AI use.
The contract lets the Pentagon use Google’s AI on classified networks for tasks like mission planning and weapons targeting.
Google must help adjust AI safety settings and filters if the government requests it.
The agreement says the AI should not be used for domestic mass surveillance or autonomous weapons without human oversight.
Google employees have protested the deal, worrying their AI work could be used harmfully or unethically.
More than 600 Google workers signed a letter asking the CEO to stop providing AI for classified government work.
Alphabet, Google’s parent company, removed a ban on AI use for weapons and surveillance last year.
The Pentagon says it wants AI to support lawful government uses without mass surveillance or fully autonomous weapons.
A court case began between Elon Musk and OpenAI co-founder Sam Altman over OpenAI’s change from a non-profit to a for-profit company. Musk claims the company broke promises and unfairly benefited its leaders, while OpenAI says Musk’s lawsuit is motivated by jealousy and competition.
Key Facts
Elon Musk and Sam Altman are involved in a legal dispute in a California courtroom.
Musk co-founded OpenAI but left in 2018; he argues OpenAI changed its mission and self-enriched its leaders.
OpenAI denies the claims and says Musk filed the lawsuit to harm a competitor since he started his own AI company, xAI.
The trial involves testimony from Musk, Altman, and other tech executives like Microsoft’s CEO Satya Nadella.
The case is about broken promises, not technical AI details, according to the judge.
Musk is seeking $134 billion in damages and wants Altman and co-founder Greg Brockman removed from OpenAI’s leadership.
OpenAI plans to go public with a valuation of about $1 trillion later this year.
Jury selection showed some jurors had negative views of Musk and AI, but the judge emphasized the case is legal, not technical.
Taylor Swift’s company applied for trademarks to protect her voice and image because of concerns about artificial intelligence (AI) creating fake audio and videos that imitate her. These trademarks cover specific phrases she says and an image of her on stage, aiming to stop unauthorized use.
Key Facts
Taylor Swift filed three trademark applications with the U.S. Patent and Trademark Office.
Two trademarks cover her voice saying “Hey, it’s Taylor Swift” and “Hey, it’s Taylor.”
The third trademark is for an image of Swift holding a guitar on stage.
These actions respond to AI creating “deepfakes,” which are fake videos or audio that show people doing or saying things they did not do.
Intellectual property lawyer Josh Gerben says this is becoming a common protection tool against AI misuse.
Actor Matthew McConaughey also filed to trademark a famous line to protect his voice.
Current trademark laws help fight copying but were made before AI technology advanced.
The fast growth of data centers, which help run artificial intelligence (AI), has caused political pushback at the community level. This opposition has led to strict actions like a ban on new data centers in Maine and concerns about high electricity costs and the reliability of the power grid.
Key Facts
Data centers are expanding quickly to support AI technologies.
Some local communities have reacted negatively to this growth.
Maine has banned new data center development because of these concerns.
Rising electricity bills are one reason for the pushback.
Some solutions proposed by companies and officials have been criticized as dismissive of local concerns.
The power grid is facing serious challenges in handling the increased demand from data centers.
There is worry that the power grid could become unreliable due to this stress.
General Motors (GM) is upgrading millions of its 2022 and newer Cadillac, Chevrolet, Buick, and GMC vehicles with Google Gemini, an AI assistant that makes cars smarter through software updates sent over the air. This upgrade will replace Apple CarPlay and Android Auto by 2028 and aims to offer drivers a more natural and helpful in-car digital experience.
Key Facts
GM will upgrade about 4 million U.S. vehicles from model year 2022 and newer with Google Gemini software.
The upgrade is free and delivered over-the-air, meaning no dealership visits are needed.
Google Gemini is an artificial intelligence (AI) assistant that learns driver habits to provide better help over time.
GM plans to phase out Apple CarPlay and Android Auto in all new vehicles by 2028, even in gas-powered cars.
The new system can handle tasks like rerouting around traffic, translating text messages, and finding parking that can accommodate trailers.
GM says the AI assistant aims to reduce driver stress and improve the usefulness of time spent in cars.
Future updates will make the AI more integrated with GM’s OnStar safety and support more languages and markets.
GM began developing its own AI assistant as part of its GM Forward strategy announced in late 2023.
Starting June 1, GitHub will change how it charges for its Copilot AI service, moving to a model where users pay based on how much they actually use the AI. The goal is to better match costs with usage and keep the service financially sustainable as demand grows.
Key Facts
GitHub Copilot currently gives users monthly "requests" but those cover many different AI tasks with varying computing costs.
Under the new model, subscribers will get "AI Credits" based on their subscription, and pay extra if they use more credits.
Costs will depend on the number of AI tokens used, which measure input and output in AI interactions.
Simple AI features like code completion will not consume credits, but code reviews will use additional GitHub Actions minutes.
The change responds to a rise in high AI usage that has nearly doubled Copilot’s weekly costs since January.
GitHub paused new signups and tightened usage limits to maintain service quality before this pricing change.
Other AI companies like Anthropic are also moving to usage-based billing to manage rising AI computing costs.
GitHub says this approach will help keep Copilot reliable and financially sustainable as demand increases.