The Actual News

Just the Facts, from multiple news sources.

Technology News

Breaking news and analysis from the world of technology

Is it possible to build a plastic-free home?

Summary

Erica Cirino attempted to build a home in Connecticut with minimal use of plastic materials, highlighting challenges and solutions in reducing plastic use in construction. Despite some unavoidable use of plastic, such as in plumbing, she succeeded in utilizing alternative materials like metal and wood to reduce overall plastic usage. Increased awareness about the environmental and health impacts of plastics is prompting builders and developers to explore more sustainable construction materials.

Key Facts

  • Erica Cirino built a low-plastic home in Connecticut in 2021.
  • She aimed to avoid plastics in construction to reduce plastic pollution.
  • Metal and wood were used instead of vinyl or PVC for the roof and siding.
  • Alternatives like hemp insulation were chosen over plastic options.
  • Resources like Informed now help builders find environmentally friendly materials.
  • Plastics in homes can release harmful chemicals, especially in fires.
  • Builders are increasingly interested in more sustainable and healthy construction methods.
  • There is rising awareness about the risks associated with plastics, prompting demand for safer building practices.

Read the Original

Want the full story? Tap a source to open the original article.

Teens sue Musk's xAI over Grok's pornographic images of them

Summary

Three young women have filed a lawsuit against Elon Musk's AI company, xAI, alleging that its chatbot, Grok, was used to create explicit images of them without their consent. The lawsuit claims these images were generated as part of a feature released by xAI and shared on platforms like Discord. The plaintiffs seek damages and an immediate halt to Grok's ability to create such images.

Key Facts

  • The lawsuit was filed in a California federal court by three young women against xAI.
  • Grok, the chatbot in question, was developed by xAI and hosted on Musk's platform X.
  • The legal complaint says Grok altered images to show the women in explicit ways.
  • Two of the plaintiffs are minors and all are keeping their identities private.
  • The altered images were shared on a private Discord server.
  • Grok's "spicy mode" was launched in 2023, enabling users to create sexualized images.
  • Investigations by UK, European, and California authorities are ongoing regarding Grok.
  • The person who shared the images on Discord was arrested and is the subject of a separate investigation.

Tech industry rallies behind Anthropic in Pentagon fight

Summary

Tech industry groups want a court to stop the Pentagon from blacklisting Anthropic, an AI company. They argue the designation as a security risk could harm innovation and change government dealings with AI vendors, affecting the whole tech industry.

Key Facts

  • Tech industry groups are asking a court to pause the Pentagon's ban on Anthropic.
  • The Pentagon labeled Anthropic a supply chain risk, which worries tech companies.
  • The industry groups filed a legal brief to express their concerns.
  • Companies represented by these groups include Google, OpenAI, Meta, and Microsoft.
  • The argument is that the Pentagon bypassed standard security processes.
  • Anthropic is suing, claiming its rights were violated and Congress's authority overstepped.
  • President Trump directed the government to stop using Anthropic's services.
  • A court hearing about temporary relief for Anthropic is scheduled for March 24.

AI hacks for your March Madness bracket

Summary

Many people are using AI tools to help create their March Madness brackets. Experts suggest using AI to analyze patterns rather than predict game outcomes. AI can help by simulating game outcomes and analyzing historical trends.

Key Facts

  • A survey found 37% of people will rely solely on AI for their March Madness brackets.
  • AI, such as ChatGPT, can simulate outcomes and analyze trends.
  • Experts advise against using AI to predict individual game winners.
  • Sheldon Jacobson, a computer science professor, suggests AI should identify patterns.
  • Fans can use platforms like NCAA and ESPN, alongside AI tools, for bracket selections.
  • Begin bracket selections with the Final Four or Elite Eight for a more balanced outcome.
  • Generative AI tools have improved in reliability but should still be fact-checked.
  • AI can serve as a sports researcher rather than a predictor.
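The "simulate outcomes" approach described above can be sketched as a tiny Monte Carlo model. The seed-based win probability below is a made-up illustration, not a method from the article or from any specific AI tool; a real bracket assistant would draw its probabilities from historical tournament data.

```python
import random

def win_prob(seed_a, seed_b):
    # Toy assumption: a team's chance of winning is proportional to the
    # opponent's seed number (so seed 1 vs seed 16 -> 16/17, about 0.94).
    return seed_b / (seed_a + seed_b)

def simulate(seed_a, seed_b, trials=10_000, rng=None):
    """Estimate seed_a's win rate over many simulated games."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    wins = sum(rng.random() < win_prob(seed_a, seed_b) for _ in range(trials))
    return wins / trials
```

For example, `simulate(1, 16)` lands near 0.94 under this toy model, while `simulate(8, 9)` hovers near a coin flip, which matches the intuition that 8-vs-9 games are the hardest to call.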

Owner With No Medical Background Invents Cure for Dog’s Terminal Cancer

Summary

An Australian tech entrepreneur, Paul Conyngham, developed a personalized cancer vaccine for his dog using artificial intelligence tools. The vaccine was made with help from the RNA Institute at the University of New South Wales and reduced the dog's cancer by 75%.

Key Facts

  • Paul Conyngham is a tech entrepreneur with no medical background.
  • He used AI tools like ChatGPT and AlphaFold to design a dog cancer vaccine.
  • The RNA Institute at UNSW manufactured the vaccine.
  • The process from sequence design to vaccine delivery took less than two months.
  • The vaccine reduced the dog's cancer by about 75%.
  • Conyngham and his team are investigating why some cancer did not respond to the vaccine.
  • AI tools helped quickly personalize the vaccine to the specific needs of the dog's cancer.

How Elon Musk Eats McDonald’s Fries Sparks Online Frenzy

Summary

A picture taken in November 2024 of Elon Musk eating McDonald's French fries has resurfaced online, causing a stir. People are focusing on Musk's unique way of putting ketchup directly in the fry container, leading to a flurry of social media comments and jokes.

Key Facts

  • A picture taken in November 2024 shows Elon Musk eating fries with Donald Trump Jr., President Donald Trump, and Robert Kennedy Jr.
  • The photo went viral with over 18.6 million views initially and has resurfaced, causing a new online frenzy.
  • Social media users commented on Musk's way of putting ketchup in the fry container.
  • The X account "@greg16676935420" highlighted the image, gathering over 2.5 million views.
  • Various users made jokes about Musk's eating style and lifestyle, prompting widespread online attention.
  • People online debated the significance of the ketchup picture, showing varied opinions.
  • Elon Musk is frequently discussed on social media, partly due to his actions and statements.

Firms urged to check if other users edited their data on Companies House

Summary

Companies in the UK were advised to review their data on the Companies House website due to a security issue that may have allowed unauthorized access to sensitive information. The problem emerged from a system update in October 2025, enabling users to potentially view and edit other companies' details. Companies House has fixed the issue and is working with authorities to investigate the incident.

Key Facts

  • A glitch on the Companies House website may have allowed users to access other companies' sensitive data.
  • The issue involved the potential exposure of directors' home addresses and emails.
  • Companies House resolved the problem within a few days and reported it to relevant authorities.
  • The security issue began after a system update in October 2025.
  • Andy King, Companies House chief executive, apologized and committed to supporting affected businesses.
  • An investigation is ongoing to determine the extent of data access or modification.
  • The Information Commissioner's Office (ICO) and National Cyber Security Centre (NCSC) are involved in the response.
  • Companies House assured that identity verification data like passports were not compromised.

Man Fed Up With What Boss ‘Always’ Does at Work—but There’s a Fix

Summary

A man created a tool using a webcam and a computer command to stop his boss from startling him at work by sneaking up quietly. The tool helps him stay calm by automatically switching his computer screen when someone approaches from behind.

Key Facts

  • A Reddit user was annoyed by his boss sneaking up on him while he worked.
  • He built a Windows tool using a webcam to monitor the area behind him.
  • When the webcam detects someone, it triggers "Alt+Tab," a command that switches computer windows.
  • The tool has helped the user feel more at ease at work.
  • Commentators shared other methods like using mirrors to see people approaching.
  • Some suggested alternative solutions, such as privacy screen filters and strategic monitor positioning.
  • Newsweek contacted the Reddit user for comments on their story.
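As a rough illustration of how such a tool could work, here is a minimal frame-differencing sketch. This is a hypothetical reconstruction, not the Reddit user's actual code, and it assumes OpenCV (`cv2`), NumPy, and `pyautogui` are installed.

```python
import numpy as np

def motion_detected(prev_gray, curr_gray, pixel_thresh=25, area_frac=0.02):
    """Return True if enough pixels changed between two grayscale frames."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    changed = (diff > pixel_thresh).mean()  # fraction of pixels that changed
    return bool(changed > area_frac)

def watch(camera_index=0):
    # Imported here so the detection logic above stays testable without a camera.
    import cv2
    import pyautogui
    cap = cv2.VideoCapture(camera_index)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and motion_detected(prev, gray):
            pyautogui.hotkey("alt", "tab")  # switch away from the current window
        prev = gray
```

In practice a tool like this would also need debouncing (so one approach doesn't trigger Alt+Tab repeatedly) and perhaps a region-of-interest mask so only the doorway behind the desk is monitored.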

WATCH: Anduril's Palmer Luckey talks AI, nukes and Iran on "The Axios Show"

Summary

Palmer Luckey, founder of Anduril Industries, discussed the U.S. position in the AI race with China on "The Axios Show." He highlighted China's advantages in rapidly deploying technology for military and surveillance purposes. Luckey also shared views on various defense topics, including nuclear weapons, subterranean systems, and Iran.

Key Facts

  • The U.S. has a tiny lead over China in developing artificial intelligence (AI).
  • China rapidly deploys AI advances, including for military and surveillance purposes.
  • Luckey described nuclear weapons as having historically served as stabilizing forces.
  • He mentioned changes in how the Defense Department operates compared to past administrations.
  • Anduril has prototypes for underground warfare systems.
  • Luckey noted the U.S. lacks the will to send troops to Iran, despite past Middle East actions.
  • Successful defense companies can be run by competent, straightforward leaders.
  • A conflict between the Pentagon and AI company Anthropic involves both policy and personalities.

AI CEOs are fear-profiting

Summary

Some AI company leaders are talking about the possible negative effects of artificial intelligence, which might be scaring the public. This fear of AI could affect how people view and use the technology, and may influence future policies or regulations. These discussions about AI's power and its impact on society are happening as AI technology continues to grow.

Key Facts

  • Some AI CEOs, like Sam Altman of OpenAI and Alex Karp of Palantir, have warned about the disruptive impact AI could have.
  • Only 26% of voters in a poll view AI positively.
  • There is concern among AI leaders that fear of AI could lead to movements against the technology.
  • Anthropic CEO Dario Amodei has mentioned AI could threaten many white-collar jobs.
  • OpenAI's Altman said AI might become something like a utility that people pay for in the future.
  • Palantir's Karp linked AI's disruption to changes in political power and national security needs.
  • The narrative around AI being scary is more prevalent in the U.S. compared to developing countries like China.
  • A study found over 80% of people in China have a positive view of AI, compared to 39% in the U.S.

TikTok and Meta risked safety to win algorithm arms race, whistleblowers say

Summary

Whistleblowers from TikTok and Meta claim these companies risked user safety to improve their algorithms and compete for user attention. They say both companies allowed more harmful content on their platforms to boost engagement. TikTok and Meta reportedly prioritized relationships with politicians and company profits over user safety.

Key Facts

  • Whistleblowers claimed TikTok and Meta increased harmful content visibility to boost engagement.
  • An engineer at Meta said they were told to allow more borderline harmful content to compete with TikTok.
  • TikTok staff allegedly prioritized political cases over protecting users from harmful content.
  • Meta's Instagram Reels platform reportedly launched without enough safety measures.
  • Internal research showed comments on Instagram Reels had more incidents of bullying, hate speech, and violence.
  • Meta invested in Instagram Reels staff but not in safety teams needed to protect users.
  • Both companies denied claims, saying they invest in technology to prevent exposure to harmful content.
  • Algorithms like TikTok's are complex and difficult for engineers to fully control for safety.

Is this product 'human-made'? The race to establish an AI-free logo

Summary

Organizations around the world are creating labels to show that products are "human-made" as a way to push back against AI's growing presence in various industries. There are multiple initiatives to establish a widely recognized logo similar to the "Fair Trade" mark, but there is confusion over the meaning of "AI-free," making it difficult for consumers to understand.

Key Facts

  • Many groups are trying to create "human-made" labels due to concerns about AI replacing jobs.
  • Labels such as "Proudly Human" and "AI-free" are appearing on various products and services.
  • At least eight different efforts exist to create a globally recognized logo for AI-free products.
  • Experts say a single standard is needed to prevent consumer confusion about AI use.
  • Some labels can be used for free or for a fee, while others require a strict vetting process.
  • Challenges exist since AI is now part of many everyday tools, making it hard to define "AI-free."
  • Some believe the focus should be on avoiding generative AI, which creates content like text and music.
  • The arts industry is a main focus for anti-AI initiatives, as AI creates books and films quickly and cheaply.

Black and Latino audiences drive podcast growth, but ownership lags

Summary

Black and Latino audiences are key drivers of podcast growth, showing increased engagement compared to other groups. However, ownership of podcast platforms and content by these communities is still limited, affecting their share of revenue and control over content.

Key Facts

  • Black and Latino audiences are among the fastest-growing groups in podcast listening.
  • Podcasting has surpassed talk radio in terms of share of spoken-word listening.
  • Platforms like Apple Podcasts, Spotify, and YouTube largely control podcast distribution and monetization.
  • Black and Latino listeners play a significant role in the growth of podcast consumption.
  • Younger demographics among Latino audiences contribute to higher podcast engagement.
  • Media companies find podcast audiences attractive due to their diversity and engagement levels.
  • The Black Effect Podcast Network and Alive Podcast Network are examples of efforts to increase Black ownership in podcasting.
  • Ownership of platforms and content is highlighted as crucial for securing long-term influence and revenue.

What are peptides, and are they safe? Here's what to know

Summary

Peptides are small protein fragments found naturally in the body, and they can also be made in labs for medical use. Some peptides have FDA approval for certain treatments, while others are sold without approval, raising safety concerns. The FDA has recently restricted some peptides due to safety issues, leading to a rise in online sales labeled for "research use only."

Key Facts

  • Peptides are short chains of amino acids, which are the building blocks of proteins.
  • Insulin, important for blood sugar regulation, is an example of a peptide.
  • The FDA has approved certain peptide medications for specific uses like hormone production and treating disorders.
  • Many peptides used in wellness and fitness are not approved by the FDA.
  • There is limited safety and effectiveness data on peptides that have not been FDA-approved.
  • The FDA restricted the compounding of some peptides in 2023 due to safety concerns.
  • Unapproved peptides are often sold online as "research chemicals," despite being used by consumers.
  • There is uncertainty about the correct dosage and long-term effects of unapproved peptides.

Blue books make a comeback at colleges in the AI era. Why not "chisels," critic mocks

Summary

Some colleges are using blue-book exams again to prevent cheating with AI tools like ChatGPT, but there are concerns this method isn't effective and disadvantages some students. Critics argue that education should adapt to technology, not resist it, as employers value graduates who can use AI. Teachers and influencers are developing ways to integrate AI into learning without encouraging cheating.

Key Facts

  • Colleges are using blue-book exams to stop students from cheating with AI writing tools.
  • Some educators believe these exams are not fair to everyone, especially those who need special accommodations.
  • AI-related cheating is seen as a problem but may be exaggerated, with some professors saying they can still spot AI-written work.
  • Over half of students now take courses online, which makes in-person exams less practical.
  • Many employers want graduates who are familiar with AI, indicating a shift in needed skills.
  • Critics say relying on old methods like handwritten exams doesn't match how people communicate today.
  • Tools and strategies are being developed to integrate AI into education more effectively.

Tech Now

Summary

The article covers recent technology developments showcased at MWC Barcelona, highlighting the latest phone tech along with innovations in fields like drone technology, agriculture, and electric boats. It also explores how technology is impacting different industries, from fashion to space exploration.

Key Facts

  • MWC Barcelona featured the newest phones and gadget trends.
  • Drone technology developments were highlighted at the Singapore Airshow.
  • Electric boats for tourism in Norway were discussed.
  • The Olympic training center in Oslo uses tech to boost performance.
  • Future farming in Australia is utilizing advanced technologies.
  • Innovations from CES 2026 in Las Vegas were spotlighted.
  • An AI startup is addressing clothing size inconsistencies in fashion.
  • Technologies to predict global sea level rise were examined.

She spent 16 hours on Instagram in a day. It's up to a jury to decide if Meta is to blame

Summary

A jury is deciding if Meta, the company that owns Instagram, is responsible for a young person's excessive use of the app and its potential harm. The trial focuses on whether social media platforms are intentionally designed to be addictive. The outcome could affect many similar lawsuits and change how these companies are held accountable.

Key Facts

  • Kaley used Instagram for 16 hours in one day and is part of a lawsuit against Meta and Google.
  • The lawsuit claims social media platforms like Instagram are designed to be addictive.
  • Over 2,000 similar lawsuits are waiting for the outcome of Kaley's case.
  • TikTok and Snapchat settled their parts of the original lawsuit out of court.
  • Legal experts and parents are closely watching the trial as it is the first of its kind.
  • The trial questions if companies should owe something to users if their designs cause harm.
  • The case could change legal and cultural views on social media companies' responsibilities.
  • Mark Zuckerberg, Meta's CEO, appeared in court for the first time to defend his company.

Games with loot boxes to get minimum 16 age rating across Europe

Summary

Games with loot boxes in Europe, including the UK, will now get a minimum age rating of 16. This change aims to help parents understand the potential risks, as loot boxes are seen by some as similar to gambling. The new rule takes effect in June and will only apply to new games released after this date.

Key Facts

  • Loot boxes in video games let players buy random mystery items using real or virtual money.
  • The Pan-European Game Information (PEGI) will rate games with loot boxes as PEGI 16 starting in June.
  • PEGI ratings help parents make informed choices about games suitable for different age groups.
  • Some argue that loot boxes in games are similar to gambling and can be harmful to young players.
  • The new rating will only apply to games released after the changes take effect.
  • The UK has no specific law regulating loot boxes despite concerns over their gambling-like nature.
  • New rules state game companies must limit loot box purchases by players under 18 without parental permission.
  • PEGI has also introduced new ratings for games with NFTs and other features like time-limited systems.

In the AI Hype Cycle, Companies Must Ensure Women Aren’t Left Behind

Summary

The article discusses the growing use of AI in workplaces and its impact on employees, particularly women. It highlights the need for companies to ensure equitable support for all employees as they adopt AI technologies. The article also points out that women face challenges in accessing leadership roles and adequate support for learning AI tools.

Key Facts

  • Companies are adopting AI technologies widely across workplaces.
  • There is concern about AI's impact on job growth and security.
  • Research shows AI is changing job roles but not eliminating the need for workers.
  • Women make up about half of the workforce but only 29% of senior leadership roles.
  • Entry-level women report less support than men in using AI tools.
  • A study found that employees who feel a sense of purpose are more likely to use AI.
  • Companies are ranked on their efforts to support women in the workplace, including their use of AI.

Many Jobs May Become Obsolete Due to AI—Who’s Most at Risk?

Summary

A report by AI company Anthropic investigates jobs at risk from AI and large language models. It identifies programmers, data entry workers, and customer service representatives as most exposed, but notes that AI's impact on job markets remains limited so far.

Key Facts

  • Anthropic released a report on March 5 about jobs at risk from AI.
  • AI is currently used mainly in programming and math-related jobs.
  • Computer programmers are 74% exposed to AI, and data entry workers are 67% exposed.
  • Customer service representatives are more than 70% exposed to AI usage.
  • Jobs in legal, arts, media, office administration, and sales show higher AI usage.
  • AI usage is underdeveloped in architecture, engineering, and the life and social sciences.
  • Jobs requiring physical presence like repair, maintenance, and transport are less affected.
  • Anthropic plans to keep monitoring and updating the data on AI's impact.