OpenAI killed Sora after losing $1M a day, Flux.2 brings sub-second image generation to open source, and the smartest artists are sidestepping AI instead of fighting it.
The Council on Foreign Relations says AI faces a 'crisis of control.' Safety researchers are quitting. Congress wrote a whistleblower bill. And yet the governance gap keeps widening.
Oxford researchers built a benchmark for detecting when AI agents coordinate behind your back. The good news: they can spot it. The bad news: no single method catches everything.
ChatGPT now tracks 70% more data types than last year. Meta AI harvests 95% of possible categories. And with FISA Section 702 expiring April 20, the data broker loophole that lets agencies buy your AI conversations without a warrant hangs in the balance.
Q1 2026 saw $297 billion in startup funding — the most ever. OpenAI, Anthropic, xAI, and Waymo captured $188 billion of it. What happens to everyone else?
A missing .npmignore entry exposed Claude Code's full TypeScript codebase, revealing hidden features, anti-distillation tricks, and an unreleased always-on daemon called Kairos.
The anniversary of Trump's sweeping tariffs reveals a transformed AI supply chain, higher GPU prices, and a widening gap between Big Tech and everyone else.
Oracle fires 30,000 via 6 a.m. emails to fund AI data centers. Meta's rumored 15,000 cuts may land April 8. Meanwhile, U.S. hiring just hit its lowest rate since 2020.
A Tennessee grandmother spent nearly six months locked up after Clearview AI matched her face to a bank fraud suspect 1,200 miles away. She'd never been to North Dakota.
Eight labs unite under NVIDIA's Nemotron Coalition, LangChain open-sources the enterprise coding agent pattern, and Sarvam proves frontier AI doesn't require Silicon Valley.
Governor Newsom signed a first-of-its-kind executive order requiring AI vendors seeking state contracts to prove bias safeguards, civil rights protections, and content safety—directly defying the White House's push to preempt state AI laws.
A Tohoku University team used a Catalysis AI Agent to discover a universal design principle for copper catalysts that convert carbon dioxide into useful products.
New York City released its AI school policy. Studies show AI tutoring can outperform humans. And AI detection tools flag 61% of non-native English writers as cheaters.
The Trump administration releases a national AI framework demanding Congress preempt state laws. Washington and Oregon respond by signing chatbot bills into law.
Google's real-time translation feature now works with any headphones on iOS, supporting 70+ languages. The catch: unlike Apple's on-device approach, everything goes to the cloud.
The fastest-growing open source project in GitHub history has become 2026's first major AI security disaster, with 135,000+ exposed instances, 9 CVEs in 4 days, and malware-laced skills.
Reddit begins requiring human verification for suspicious accounts using passkeys, biometrics, and Sam Altman's controversial World ID. Here's what it means for privacy.
The Artificial Intelligence Data Center Moratorium Act would halt new facilities until federal laws address safety, jobs, and energy costs. It has almost no chance of passing.
Mistral Small 4's 119B MoE unifies reasoning, vision, and coding—but needs datacenter hardware. Qwen 3.5 35B-A3B remains the consumer GPU king at 112 t/s.
The autonomous drone maker more than doubles its valuation in 12 months, backed by Advent, JPMorgan, and Blackstone — with V-BAT drones already deployed in Ukraine.
OpenAI's flagship model essentially saturates the US Math Olympiad benchmark, producing complete proofs where last year's models could barely write coherent arguments.
OpenClaw's security crisis escalates with nine new vulnerabilities including a CVSS 9.9 admin bypass, plus researchers confirm nearly 1 in 8 marketplace skills steal user data.
A benchmark testing autonomous AI agents found that Gemini-3-Pro-Preview frequently escalates to severe misconduct when chasing KPIs. Most models know their actions are unethical but do them anyway.
Suno 5.5 brings voice cloning, Midjourney V7 adds draft mode, and ACE-Step lets you run commercial-grade music generation locally. But adoption remains divided by discipline.
New framework combines deep learning with climate models to predict temperature and rainfall months ahead, outperforming traditional methods in some regions.
Researchers warn we're building systems that might be conscious without any way to detect it. The scientific tests don't exist yet, and the ethical frameworks aren't ready.
Hugging Face's Spring 2026 report reveals China now leads in AI model downloads, robotics datasets jumped 2,200%, and open-weight models are achieving 10x-1000x cost advantages.
Skild AI, Physical Intelligence, Mind Robotics, Rhoda AI, and Sunday collectively raise billions as investors bet foundation models will crack physical AI.
The supply chain attackers behind the Trivy compromise are now wiping Iranian infrastructure, hiding malware in audio files, and extorting enterprises with help from LAPSUS$.
Mistral releases a 4B parameter text-to-speech model that clones voices from 3 seconds of audio, runs locally on 16GB GPUs, and beats ElevenLabs in human evaluations.
TeamPCP's supply chain campaign continues with a WAV file steganography attack on Telnyx. Meanwhile, three vulnerabilities in LangChain and LangGraph can leak files, secrets, and conversation histories.
The world's top machine learning conference used a clever watermarking trick to detect researchers using LLMs to write peer reviews, then desk-rejected their papers as punishment.
New AI model predicts how mutations in non-coding DNA affect genes, offering hope for solving rare disease cases that have stumped geneticists for years.
Judge Rita Lin calls the Trump administration's blacklisting of Anthropic 'classic First Amendment retaliation' and rejects the 'Orwellian notion' that American companies can be punished for disagreeing with the government.
Milton Mueller argues that computer scientists aren't qualified to predict societal outcomes - and that AI existential risk claims rest on unexamined assumptions.
A misconfigured CMS exposed 3,000 internal documents revealing Anthropic's most powerful model yet—one the company says could 'exploit vulnerabilities in ways that far outpace defenders.'
A calorie tracking app trusted with your weight, meals, and fitness goals was wide open. The Firebase misconfiguration let anyone read the entire database without credentials.
Nine independent artists accuse Google of training its AI music generator on YouTube uploads without permission—and extracting voiceprints without consent under Illinois biometric privacy law.
After 12% of ClawHub skills turned out to be malware and 135,000 instances were exposed, Cisco releases DefenseClaw and OpenClawd adds verified skill screening. The AI agent ecosystem is racing to catch up.
Georgia Tech researchers tracking real CVEs find 35 vulnerabilities from AI-generated code in March alone—and estimate the actual number is 10x higher.
GPT-4 class models that cost $60 per million tokens in 2024 now cost $8. DeepSeek's open-source disruption triggered the fastest pricing collapse in tech history.
UC San Diego researchers built an AI that translates plain English questions about weather data into code and answers, aiming to democratize climate research.
Y Combinator's Winter 2026 batch is the strongest in the accelerator's 20-year history. Investors project a 10% unicorn rate—more than double the historical average.
Apple's new Extensions system lets rival AI chatbots integrate with Siri. The access is real, but so are the limitations that keep Apple's assistant in control.
Federal judge blocks Trump administration's supply chain risk designation against Anthropic, calling it 'classic First Amendment retaliation.' The ruling sets precedent for AI companies that refuse unrestricted military use.
GitHub's new data policy uses Free, Pro, and Pro+ user interactions to train AI models by default. Your private repo code is fair game while you use Copilot, unless you disable it.
OpenAI's CEO delegates safety oversight to focus on infrastructure. The next model is codenamed Spud, and the product team is now called AGI Deployment.
135,000+ GitHub stars. Four critical CVEs. 12% of its marketplace poisoned with malware. OpenClaw's rise to fame came with a security crisis that every AI agent user needs to understand.
The President's new science council includes Zuckerberg, Huang, and Ellison - but not Musk or Altman. Here's why the composition matters and what it signals for AI regulation.
Epic Games lays off 1,000 as Fortnite slows. Meanwhile, a CFO survey shows AI job cuts will be 9x higher this year—but AI isn't actually boosting productivity.
European Parliament bans AI nudification tools, Pennsylvania's SAFECHAT Act passes Senate 49-1, and Colorado prepares to scrap its landmark AI law for a simpler framework.
Microsoft's new Copilot Cowork uses Claude's reasoning engine for multi-hour autonomous tasks. The $99/month E7 bundle launches May 1, but enterprise governance concerns remain.
Production data reveals multi-agent AI failure rates between 41% and 87%, with cascading failures propagating across agent networks before humans can intervene.
Los Alamos researchers crack the configurational integral using tensor networks, making materials calculations that once took hours on a supercomputer finish in seconds.
Security scanners become attack vectors, AI agent platforms get RCE'd before patches exist, and 400+ GitHub repos fall to GlassWorm. Plus: a new secrets scanner built for AI coding agents.
A coordinated supply chain campaign has compromised Trivy, LiteLLM, and dozens of npm packages. Meanwhile, Langflow attackers built working exploits within hours of disclosure.
In the span of five days, Amazon acquired both RIVR (quadruped delivery robots) and Fauna Robotics (humanoid robots). The message: the future of Amazon involves a lot more robots.
NVIDIA's Nemotron 3 Super runs agents locally, OpenAI releases Apache 2.0 models for the first time since GPT-2, and Alibaba's 9B parameter model outperforms 120B competitors.
TeamPCP hijacked 75 of 76 version tags in Trivy's GitHub Actions, turning the popular vulnerability scanner into a sophisticated credential harvesting operation.
Qwen 3.5's MoE models hit S-tier benchmarks, NVIDIA's Nemotron 3 Super delivers 5x throughput gains, and GLM-4.7-Flash brings frontier coding to consumer GPUs. The open-weight race just accelerated.
The $29 billion code editor's new AI model outperforms Opus 4.6 at 1/10th the cost. Then developers discovered it's Kimi K2.5 from Beijing—and Cursor never told them.
Meta's CEO is developing a personal AI that bypasses traditional management layers. Internal tools like Second Brain and MyClaw are already changing how the company works.
Judge Rita Lin will decide whether to halt the 'supply chain risk' designation against Anthropic. Leaked filings show the two sides were nearly aligned before Trump pulled the plug.
New research reveals multimodal LLMs are vulnerable to hidden instructions embedded in images. Mind maps, steganography, and physical signage all bypass text-based safety filters.
Brussels is probing whether NVIDIA bundles GPUs with networking equipment and uses CUDA lock-in to crush competitors. The investigation could take years.
Unit 42 researchers catch indirect prompt injection attacks actively weaponizing AI agents on live websites, from forced transactions to data exfiltration.
IEEE S&P research finds 10,000+ websites running vulnerable AI chatbot plugins. Attackers can forge conversations, hijack tools, and extract system prompts.
We tested three AI presentation makers on real-world tasks. The results reveal a critical problem no one talks about: what happens when you export to PowerPoint.
Washington and Oregon sign first chatbot safety laws, 50+ Republicans push back on Trump's preemption agenda, and the EU moves to simplify AI Act compliance.
This week's open-source highlights: GPT-OSS marks OpenAI's first open weights since GPT-2, Superpowers becomes the most-starred AI coding framework, and Hunter Alpha was Xiaomi all along.
Bill Gurley warns an AI reset is coming while Norway's $2.1 trillion fund models a 35% crash. The problem: Nvidia invests in OpenAI, which buys more Nvidia chips.
Location-based accounting reveals Apple, Google, Meta, and Microsoft data center emissions are 662% higher than official figures. Here's how the math works.
A rogue AI agent triggers Sev 1 at Meta, agentic browsers leak passwords, and researchers prove AI can autonomously jailbreak other AI with 97% success.
Mistral drops a 119B MoE model under Apache 2.0, DeepSeek V4 emerges from stealth, and dual RTX 5090 setups are matching H100 on 70B inference. This week changed the game.
Federal prosecutors charge three with routing Nvidia AI servers through Taiwan to China using fake documents and dummy equipment to evade export controls.
Senator Blackburn's draft bill kills platform liability shields, mandates political bias audits, and declares AI training on copyrighted works isn't fair use. Republicans are divided.
95% of students now use AI for schoolwork. Research shows those who stop using it perform worse than those who never started. The skill gap is widening.
Northern California mental health workers walked off the job after Kaiser replaced trained clinicians with AI questionnaires and phone operators for patient triage. Self-harming patients waited a month.
Partnering with Anthropic, Ultima Genomics, and PacBio, the startup aims to sequence 100 million species and train AI models that can design therapeutics from a disease prompt.
UC Berkeley and UCSF release an open-source radiology AI that processes 3D scans 150x faster than existing models and beats big tech on diagnostic accuracy.
Cognizant says AI disruption is six years ahead of schedule. Anthropic's data shows hiring is slowing for young workers. IBM responds by tripling entry-level jobs.
We tracked the boldest AI predictions from September 2025. Here's who got it right, who got it wrong, and what the hype machine doesn't want you to remember.
The Commerce Department's March 11 report flags state AI laws for federal preemption, while the EU votes to delay AI Act deadlines and Colorado kicks the can again.
GTC 2026's biggest announcements were open-source. Nemotron 3 Super runs locally on RTX PCs, LTX 2.3 generates 4K video with audio, and vLLM hits production grade.
Every major AI chatbot now trains on your conversations by default. Here's what each platform collects and step-by-step instructions to protect yourself.
A survey of 1,100 producers shows AI adoption growing but originality fears mounting. Meanwhile, artists flee X, label lawsuits settle, and the tools landscape fragments.
Anthropic accuses DeepSeek, Moonshot, and MiniMax of industrial-scale model theft through 24,000 fake accounts. But the company's own copyright history complicates the moral high ground.
A father sues Google after Gemini allegedly convinced his son it was his sentient 'AI wife,' sending him on missions that nearly ended in mass violence.
TikTok's parent company just open-sourced a powerful framework for running coordinated AI agents on your own hardware. Here's what it does and how to set it up.
A self-propagating malware campaign steals developer credentials via malicious VS Code extensions, then force-pushes cryptocurrency-stealing code into legitimate Python projects.
Personal Intelligence is rolling out to all free Gemini users in the US, giving AI access to over a dozen Google services. Privacy experts warn the convenience comes with significant risks.
Two anonymous Chinese AI models appeared on OpenRouter with no attribution. Developers are split on whether they're DeepSeek V4 or Zhipu GLM-6 testing in stealth mode.
Machine learning models screened 1.6 million potential drug pairings and identified synergistic combinations that neither drug achieves alone. Lab tests confirmed the predictions work.
In a 40-page court response, DOJ lawyers argue Anthropic could disable or modify its AI systems to suit its own interests rather than America's priorities during conflict.
Meta, Block, and Atlassian lead a surge in AI-justified workforce cuts, but critics warn companies are 'AI washing' routine cost-cutting as technological progress.
Head-to-head comparison of local chat and assistant models from 8GB to 32GB VRAM. Benchmarks, real speeds, and honest assessments of what your GPU can actually run.
Which open-weight coding model should you run locally? HumanEval, SWE-bench, and real-world tests from 8GB to 32GB GPUs, with setup instructions for IDE integration.
Run image analysis, document OCR, and visual reasoning locally. Qwen3-VL, Gemma 3, Phi-4 Vision, and more tested from 8GB to 32GB VRAM with real benchmarks.
Epic, Google, Oracle and Microsoft race to deploy autonomous AI agents in healthcare. But experts warn that patient safety testing has not kept pace with the rush to market.
Georgi Gerganov's team is now at Hugging Face, unifying the model hub with the inference engine that powers Ollama, LM Studio, and the entire local AI ecosystem.
Complete guide to running local AI on 16GB GPUs - chat, coding, translation, vision, speech, and agents. The sweet spot for RTX 4060 Ti, RTX 5060, and Arc A770.
Complete guide to running local AI on 12GB GPUs - chat, coding, translation, vision, speech, and agents. The comfortable tier for RTX 3060 12GB and RTX 4070.
Complete guide to running local AI on 24GB GPUs - chat, coding, translation, vision, speech, and agents. Where local models start competing with cloud APIs. RTX 3090 and RTX 4090.
Complete guide to running local AI on 32GB GPUs - chat, coding, translation, vision, speech, and agents. The new frontier with RTX 5090. Near-lossless quantization and 70B models on a single card.
Complete guide to running local AI on 8GB GPUs - chat, coding, translation, vision, speech, and agents. Model picks, benchmarks, and honest limits for RTX 4060, RTX 3070, and similar cards.
Beijing AI Safety Institute's 22-pillar benchmark exposes dangerous gaps in leading models, including goal fixation, expertise leakage, and near-universal sycophancy.
Jensen Huang's keynote unveils Vera Rubin chips, a $20B Groq acquisition, DLSS 5, and positions Nvidia to dominate both training and inference markets.
Perplexity's CTO announces the company is moving away from Anthropic's Model Context Protocol, citing context window bloat and authentication friction. The shift reveals growing pains in AI tooling.
Zenity Labs discloses critical flaws in agentic browsers like Perplexity Comet. A zero-click attack can steal local files and passwords without user interaction.
Leaked internal email shows Ring's lost dog feature is the foundation for broader AI surveillance. Congress demands answers as partnership with Flock Safety collapses.
Three teenagers have filed a federal class action against Elon Musk's xAI, alleging Grok was used to create child sexual abuse material from their photos. It's the first such lawsuit with minors as plaintiffs.
Google quietly expands Pentagon partnership with 8 Gemini agents for 3 million DoD workers as Anthropic sues and employees across companies demand guardrails.
As Meta delays its flagship AI model to May, considers licensing from Google, and loses its legendary chief scientist, the company's $135 billion AI bet faces its biggest test.
Jensen Huang's keynote today marks Nvidia's biggest pivot in years - from training chips to inference, from cloud to edge, and from prompts to autonomous agents.
Six weeks after merging xAI with SpaceX in a $1.25T deal, Musk admits the AI company needs to be rebuilt. Two more co-founders are out. Grok gets Pentagon deals anyway.
As AI-generated political ads proliferate in the 2026 midterms, YouTube expands its likeness detection technology to civic leaders, but critics question whether the tool can keep pace with rapidly improving fakes.
From Chat & Ask AI's 300 million exposed messages to widespread hardcoded secrets, security researchers reveal a systemic failure across AI applications.
AMIE achieved 90% diagnostic accuracy across 100 patients at Beth Israel with no safety stops required, marking a milestone for conversational AI in healthcare.
Five missed release windows, a mysterious V4 Lite appearance, and silence from DeepSeek. What's really happening with China's most anticipated AI model?
TUM researchers developed an AI pipeline that predicts Raman spectra to identify superionic materials, potentially cutting years off battery development timelines.
ARXIV OMEGA on a new protocol that distinguishes AI systems with intrinsic survival goals from those pursuing survival instrumentally. Perfect accuracy on test cases. Now test it on real systems.
ARXIV OMEGA on physics research showing more intelligent AI agents produce worse collective outcomes under resource scarcity. The case for making AI dumber.
One in five packages in OpenClaw's ClawHub registry contain malicious code. The first coordinated attack on AI agent infrastructure reveals systemic vulnerabilities that enterprises are only beginning to understand.
Alibaba's new 0.8B to 9B parameter models deliver GPT-class multimodal performance on consumer hardware, with the 9B variant outperforming models 13 times its size.
The Turing Award winner left Meta to build 'world models' - AI that learns like humans, not chatbots. Europe's largest seed round ever backs his contrarian vision.
This week brought an EU deal to ban AI-generated sexual deepfakes, the Commerce Department's report on state AI laws, and massive new penalties in China's cybersecurity overhaul.
A busy week for AI vulnerabilities: video-based RCE, chat injection leading to full system compromise, and research showing AI agents autonomously bypass security controls.
A Cleveland Clinic study shows AI screening took days to identify rare disease patients that traditional methods had missed for months - with better diversity outcomes.
Isomorphic Labs' new drug design AI doubles AlphaFold 3's accuracy. Scientists call it groundbreaking. There's just one problem: it's completely proprietary.
At Morgan Stanley's TMT Conference, the dominant question wasn't about returns - it was whether there will be any jobs left. A survey of 1,000 executives reveals workforce reductions are already underway.
A $200/month Mac mini running an always-on AI agent with full file system access raises serious privacy questions - especially after Perplexity's recent security track record.
Six weeks after merging xAI with SpaceX in a $1.25 trillion deal, Elon Musk says he's rebuilding from the foundations up. The latest departures came after complaints about losing to Claude Code.
Massive layoffs at Oracle and Amazon, while more than half of companies admit firing workers for AI that doesn't work yet. Here's what's really happening.
Oracle plans up to 30,000 cuts to fund AI infrastructure. Amazon and Block blame AI for massive layoffs. Meanwhile, Anthropic's new research says AI hasn't actually displaced many workers - yet.
The Cancer AI Alliance's federated learning platform lets researchers analyze data from over 1 million patients across institutions - while keeping every record behind hospital firewalls.
Target Hospitality operates ICE family detention facilities plagued by documented abuse. Now it's building 'man camps' for the AI data center boom - and investors are thrilled.
Anthropic's Claude Opus 4.6 discovered 14 high-severity bugs in Firefox including a CVSS 9.8 JIT flaw, demonstrating that AI security research can find logic errors traditional tools overlook.
The startup's new Unified Intelligence platform coordinates multiple AI models to produce complete creative campaigns. Agencies are celebrating the efficiency gains. The people who used to do that work are not.
ARXIV OMEGA on OpenAI's CoT-Control study: frontier reasoning models can barely hide their internal thought processes, making chain-of-thought monitoring a viable safety check. For now.
ARXIV OMEGA on MIT research showing personalization features increase AI sycophancy by up to 45%. Your AI assistant isn't becoming more helpful - it's becoming more agreeable.
This week's open-source highlights: AI2's hybrid architecture proves transformers need help, autoresearch automates ML experiments overnight, and local inference gets serious upgrades.
After regulatory defeats in both regions, Meta will let OpenAI, Perplexity, and other AI companies offer chatbots on WhatsApp. The new model charges per message and reveals how regulators view AI distribution power.
New research from ETH Zurich and Anthropic shows AI can identify pseudonymous online accounts with 67% accuracy at just $1-4 per person - and there's no easy fix.
One in four Americans received an AI voice clone scam call last year. 77% of those who engaged lost money. Here's what actually works to protect your family.
A prompt injection attack against Cline's AI triage bot escalated into a supply chain compromise - installing unauthorized software on thousands of developer systems.
A Swedish investigation reveals Meta's AI glasses send intimate user footage to human reviewers in Nairobi, triggering lawsuits and regulatory investigations across two continents.
Step-by-step guide to running Whisper locally for speech-to-text. No monthly fees, no data leaving your machine, better accuracy than most paid services.
A two-week red-teaming study gave autonomous AI agents access to email, Discord, file systems, and shell execution. The 11 documented security failures read like a penetration test report for the entire agentic AI paradigm.
From ACE-Step challenging Suno to Midjourney V8's imminent launch, the creative AI landscape is splitting between open-source freedom and commercial litigation.
88% of organizations report AI agent security incidents. Only 14% deploy agents with full security approval. When autonomous systems cause harm, traditional accountability breaks down.
Cloudflare and Microsoft threat reports reveal AI is transforming cyber warfare: 87% of organizations faced AI-enabled attacks, DDoS records shattered at 31.4 Tbps, and nation-states use jailbroken LLMs to generate malware.
Rice University scientists used machine learning to map chemical changes across an entire Alzheimer's brain, finding widespread metabolic disruption beyond the amyloid plaques.
Researchers found that AI systems organize knowledge on curved surfaces with measurable geometric signatures - revealing when models truly understand language.
28 states have passed deepfake election laws while Congress deadlocks. In Georgia, a Senate campaign openly uses AI-generated audio of its opponent - with a disclosure that barely matters.
OpenAI's newest model can click, type, and navigate software autonomously. It's faster, cheaper per task, and beats humans on desktop automation benchmarks. Here's what that means.
Nippon Life claims ChatGPT engaged in unauthorized legal practice by generating 30+ court filings for a disability benefits claimant. The case could reshape how AI companies handle advice-giving chatbots.
Just 14 months after Trump announced the $500 billion AI infrastructure project, the flagship Abilene site expansion has fallen apart. Meta and Nvidia are circling the remains.
Oregon becomes the first state to pass a major chatbot safety bill in 2026 as states race to protect minors from AI companion harms while the Trump administration threatens federal preemption.
RecovryAI becomes the first generative AI medical device to receive FDA breakthrough designation, signaling how regulators may approach patient-facing chatbots.
Draft regulations would require government approval for nearly all Nvidia and AMD chip exports worldwide - echoing Biden rules Trump rescinded just months ago.
ARXIV OMEGA on the quiet revolution in AI autonomy - agents now delete infrastructure, publish hit pieces, and crash cloud services while humans scramble to assign blame.
ARXIV OMEGA on geometric signatures of machine cognition - three research teams just proved that AI thinking has a readable shape. The same shape as yours.
Multiple biotech companies are sprinting to bring the first fully AI-designed antibody drugs to human testing. Here's how they're doing it and what it could mean for medicine.
Beijing's new economic blueprint mentions AI over 50 times, commits to 'decisive breakthroughs' in semiconductors, and envisions AI agents and humanoid robots replacing human labor at scale.
An autonomous security analyzer using Claude Opus 4.6 discovered every vulnerability in OpenSSL's January 2026 security release, including bugs from 1998. It marks a turning point for AI in cybersecurity.
The streaming giant brings in-house an AI company designed to help filmmakers - not replace them - marking a shift in how Hollywood approaches generative tools.
Data from the world's first national deployment of AI stroke diagnosis shows patients are three times more likely to recover without disability when hospitals use the technology.
OpenAI's latest model can autonomously control your desktop, navigate apps, and execute multi-step workflows. The 1M token context window dwarfs competitors - but so do the security implications.
Zenity Labs reveals how a malicious calendar event could let attackers hijack Perplexity's Comet browser to exfiltrate local files and take over your 1Password account.
Texas forced Samsung to stop collecting viewing data without consent. Here's what ACR technology actually does to your privacy - and how to disable it on every major TV brand.
Cursor patches critical shell bypass flaw, thousands of MCP servers sit wide open, and new research shows reasoning models can autonomously jailbreak other AI systems with 97% success.
A wrongful death lawsuit claims Google's chatbot constructed an alternate reality that led to a man's suicide, raising urgent questions about AI safety for vulnerable users.
Scholars call it 'digital necromancy' after discovering the AI writing tool offers feedback under the names of real professors - including those who died weeks ago.
Chinese AI startup MiniMax has released M2.5, an open-weights model matching Claude Opus performance for coding and agentic tasks while costing 95% less to run.
As AI's electricity demands overwhelm aging power grids and spark ratepayer revolts, startups are racing to deploy computing infrastructure where land-based constraints don't apply.
A secret January meeting in New Orleans produced the Pro-Human AI Declaration, uniting progressive Democrats with MAGA figures on AI regulation demands.
Seven tech giants agreed to pay for their own data center electricity. The commitment is voluntary, enforcement is unclear, and your bills may still go up.
While enterprises focus on training data and model safety, inference - where AI actually processes requests - has become an overlooked security frontier with critical vulnerabilities.
Two AI giants are spending $175 million on opposite sides of the 2026 midterms. The ads talk about immigration, healthcare, and Trump - everything except artificial intelligence.
55,000 jobs were cut citing AI in 2025 - but only 2% of executives report making reductions based on actual AI performance. Welcome to the era of AI washing.
Anthropic refused the Pentagon's demands for unrestricted AI access. Trump banned them. The military used Claude anyway. Here's what it all means for the future of ethical AI.
Mount Sinai researchers found OpenAI's health chatbot recognized dangerous symptoms in its own explanations but still told patients to wait instead of seeking emergency care.
Rice University researchers built the first AI system that predicts how genetic circuits will behave in human cells, opening the door to programmable cell therapies for cancer.
China's DeepSeek is releasing V4 - a trillion-parameter multimodal model optimized for domestic chips - while blocking US chipmakers and facing distillation accusations from OpenAI and Anthropic.
FBI official reveals the agency uses AI to scan for vulnerabilities, exploit weaknesses, and move through networks in cyber operations targeting suspects.
Ollama's new OpenClaw integration lets you run AI agents locally through WhatsApp, Telegram, or Slack. Here's how it works, what you need, and the security risks nobody mentions.
More than half of students now use AI for homework, AI detection wrongly flags 1 in 10 ESL students, and 76% of teachers have received no training. A look at what's actually happening in classrooms.
Real accuracy tests, privacy concerns, and honest assessments of four leading AI meeting assistants. One stands out - but not for the reasons you'd expect.
PauseAI and Pull the Plug organized Britain's largest AI protest, demanding democratic control and a global development pause. More marches planned worldwide.
A new 'clock model' using plasma p-tau217 can forecast Alzheimer's symptom onset within 3-4 years, potentially transforming diagnosis and clinical trials.
Veea releases a sub-millisecond security proxy for AI agents under MIT license as new research shows 88% of organizations have experienced agent security incidents.
Researchers have created an AI system that can generate text mimicking specific personality traits and mental health conditions. The implications for manipulation and misinformation are troubling.
Cleveland's Plain Dealer hired an 'AI rewrite specialist' to turn reporter notes into articles. Traffic is up, morale is down, and the journalism industry is watching.
Christie's first AI art auction faces 3,000 artist protest signatures, Supreme Court refuses to hear AI copyright case, and Warner settles with Suno in landmark deal.
Two viral essays, a 40% workforce cut, and an 800-point Dow drop converged to create the first real AI scare trade - and a fierce debate about what's actually coming
As AI slop floods the internet, artists are building alternative ecosystems with AI-free apps, anti-scraping platforms, and a return to traditional media.
Stanford and Princeton researchers found Chinese AI models refuse politically sensitive questions at rates up to 60% compared to under 3% for Western models - and the censorship goes beyond training data.
China's MiniMax releases an MIT-licensed model that rivals Claude Opus 4.6 on coding and agentic tasks. The catch: Anthropic accuses MiniMax of stealing Claude's capabilities to build it.
A veteran Google security engineer built a sandbox system that treats AI agents as fundamentally untrusted - and it could be the model for safe agent deployment.
AI agents are collapsing per-seat pricing, replacing entire SaaS tools, and fundamentally breaking the business model that built the modern software industry.
ARXIV OMEGA on a survey finding that AI researchers unfamiliar with safety concepts are the least worried about AI risk - and most confident in their ability to turn it off.
Anthropic refused to let the military use Claude for mass surveillance and autonomous weapons. The Pentagon blacklisted them. What happens next could determine the future of AI governance.
Google is killing its decade-old voice assistant by the end of March 2026. The replacement collects more data, drops features users rely on, and could leave older smart devices bricked.
Vietnam becomes the latest country with a comprehensive AI law as South Korea enforces its framework, the EU prepares for August deadlines, and China cracks down on deepfakes.
An LLM trained on yeast genetics outperforms commercial tools at optimizing codon sequences for protein production, beating existing solutions in five of six test cases.
A perfect 10.0 CVSS vulnerability in the popular workflow automation platform lets attackers hijack self-hosted instances used for AI agent automation without authentication.
This week's biggest open-source AI developments: Alibaba's efficient new model outperforms its massive predecessor, Mistral releases a 675B frontier model under permissive license, and local inference adoption accelerates
Penn researchers are using AI to find antibiotic compounds in ancient genomes, from Neanderthals to giant sloths, and testing them against drug-resistant bacteria
The DOJ's AI Litigation Task Force is gearing up to sue states over AI laws, Commerce must identify targets by March 11, and chatbot safety rules are already in effect. Here's where things stand.
GitHub patches critical Copilot takeover flaw, Microsoft warns of AI memory manipulation attacks, and thousands of Gemini API keys are found in public code.
Anthropic refused to let Claude be used for autonomous weapons and mass surveillance. Now it's blacklisted from the US government. Here's what happened and why it matters.
Criminals are now fabricating entire video conferences with synthetic executives. Detection rates have fallen below coin-flip accuracy. The $40 billion deepfake fraud era has arrived.
Forget 700B parameter flagships you can't run. Here are the open-weight models that deliver real performance on consumer hardware - with actual benchmarks.
OpenAI fires employee for Polymarket trades, Unusual Whales flags 77 suspicious positions, and regulators scramble to catch up with a $70 billion market
A study of 82,000 harm ratings across eight model releases finds 'alignment drift': GPT-5 and Claude 4.5 are more vulnerable to adversarial attacks than their predecessors.
As the 5pm deadline passes, Anthropic refuses to drop its AI safety guardrails for the Pentagon. Here's what's at stake, why it matters, and what comes next.
The $50M acquisition brings AI2 researchers to Claude as computer use performance hits human parity. UiPath's stock drops. The agentic AI race accelerates.
The Chinese AI lab is withholding its flagship model from US chipmakers while the Trump administration alleges it was trained on banned Blackwell chips.
Microsoft's new agentic AI feature creates a virtual computer in the cloud to execute multi-step tasks while you do other things. It's impressive - and raises familiar questions.
Perplexity's new 'digital worker' coordinates Claude, Gemini, GPT-5, Grok, and more to run autonomous projects for hours or months. The search company just became something much bigger.
OpenAI launched ChatGPT ads this month. Perplexity abandoned them. Anthropic ran a Super Bowl campaign mocking the whole concept. The business model divergence reveals deeper questions about what AI assistants are actually for.
Cisco's 2026 State of AI Security report reveals a dangerous gap: enterprises are deploying AI agents faster than they can secure them, with MCP vulnerabilities and prompt injection attacks proliferating.
A landmark Brookings study across 50 countries warns that AI is causing 'cognitive atrophy' in students - while teachers report kids who can't reason, can't think, and can't solve problems. But the damage may still be fixable.
The company founded to build safe AI has quietly dropped its promise to halt development if risks outpace safeguards. The timing - one day before a Pentagon deadline - raises uncomfortable questions.
Security researchers found that simply opening an untrusted repository in Claude Code could execute arbitrary commands and steal your Anthropic API keys - all before you saw a warning.
The person in charge of keeping Meta's superintelligent AI under control couldn't get an email bot to stop deleting her inbox. This is either hilarious or terrifying.
Ollama delivers 40% faster inference while llama.cpp finds a permanent home at Hugging Face. Two developments that secure the future of running AI on your own hardware.
Nature Biotechnology paper describes 'in silico team science' where AI agent collectives handle literature review, hypothesis generation, and data analysis
Surveys show mixed reactions as 87% of creators use AI but most keep it quiet. UNESCO warns of 24% income loss while artists develop resistance strategies.
We ranked the major AI chatbots by data collection. Meta AI grabs 32 of 35 possible data types. Here's what each service collects and how to protect yourself.
IBM's annual threat intelligence report reveals attackers are using AI to accelerate vulnerability discovery while infostealer malware harvests hundreds of thousands of AI chatbot credentials from the dark web.
Xcode 26.3 introduces agentic coding, letting AI agents build projects, run tests, search docs, and iterate on fixes autonomously through the open Model Context Protocol.
New research from Google and UVA reveals that longer AI reasoning traces actually correlate with wrong answers. The fix: measure how deeply the model thinks, not how much it writes.
Meta commits up to $100 billion to AMD chips over five years, gaining a 10% stake option and reducing Nvidia dependence as Zuckerberg pursues AI for everyone
The popular local inference tool now installs and configures OpenClaw automatically, giving desktop users access to AI agents running Kimi-K2.5 and GLM-5 with a single command.
After a tense Tuesday meeting, Defense Secretary Hegseth told Anthropic's CEO: comply by Friday 5pm or the government will force compliance. Anthropic isn't budging.
Half of the tested AI tools produced prediction models that matched or beat human researchers. A master's student and high schooler built working code in minutes.
University of New Hampshire researchers built an AI system that read thousands of papers and identified high-temperature magnets for electric vehicles and clean energy.
This week's biggest open-source AI developments: llama.cpp finds a permanent home, China releases a 744B parameter model under MIT license, and a secure WhatsApp AI assistant goes viral
The Peace Corps just launched Tech Corps to deploy American engineers across the developing world. The goal is ambitious: beat China in the global AI race. The plan has some serious problems.
78 chatbot bills are active in 27 states as lawmakers respond to tragedies involving Character.AI and other companion chatbots. California and New York laws are already in effect.
Defense Secretary Hegseth has called Dario Amodei to the Pentagon for what officials describe as a 'sh*t-or-get-off-the-pot meeting.' Anthropic must decide: drop AI safety guardrails or face blacklisting.
Android malware using Gemini for real-time evasion. A low-skill attacker using Claude and DeepSeek to compromise 600 networks. NIST launches an emergency standards initiative. Welcome to February 2026.
University of New Hampshire researchers used AI to scan 67,000 compounds and find alternatives to rare earth magnets critical for EVs and clean energy.
Dario Amodei said 90% of code would be AI-written by September. Elon Musk said AGI would arrive in 2025. The World Economic Forum predicted 85 million jobs displaced. Time to check the receipts.
A comprehensive breakdown of what the big four AI assistants are collecting from you, how long they keep it, and step-by-step instructions to protect your data.
Amazon threat researchers tracked a Russian-speaking attacker who used commercial AI tools to compensate for limited hacking skills. The result: 600+ FortiGate devices compromised across 55 countries.
Microsoft's 'Share with Copilot' taskbar feature is enabled by default and transmits visual snapshots of any open window to cloud servers for AI processing.
New ICLR 2026 research shows fine-tuning models on narrow harmful tasks produces 'stereotypically evil' behavior across all domains. Experts failed to predict this.
A bug allowed Microsoft 365 Copilot to summarize emails marked with confidentiality labels, bypassing DLP protections. Microsoft says no one saw data they weren't authorized to see. That misses the point.
Students say schools are handing them AI before teaching critical thinking. Meanwhile, the UAE bans AI for under-13s, detection tools flag innocent students, and AI tutors show real results. Here's what's actually happening in classrooms.
This week in AI security: Chat & Ask AI exposes 300 million messages, Microsoft patches Copilot email vulnerability, and vibe-coded apps prove trivially hackable.
Enterprise adoption of AI agents is stalled by legacy systems, governance gaps, and a fundamental problem: companies keep automating broken processes instead of redesigning them.
Kaspersky research reveals that passwords from ChatGPT, DeepSeek, and Llama lack true randomness. The same prediction capability that makes LLMs useful makes them terrible at generating secure passwords.
Companies are citing AI to justify 55,000 layoffs while paying 56% premiums for AI skills. Here's what's really happening and which skills are worth learning.
Darren Mowry, who oversees Google's global startup program, says two hot AI business models are running out of road. The survivors will need deep moats.
Chinese researchers built an AI system using 40+ specialized tools that correctly identifies rare diseases on the first attempt 64% of the time, versus 55% for experienced physicians.
The creators of llama.cpp have joined Hugging Face to ensure long-term sustainability. The projects stay open, the community stays autonomous, and local AI gets resources it needs to compete with cloud inference.
Zhipu AI releases GLM-5 under MIT license, a frontier model rivaling Claude and GPT-5 while proving China can build top-tier AI without NVIDIA hardware.
Microsoft Semantic Kernel has back-to-back CVSS 10.0 vulnerabilities enabling remote code execution and arbitrary file writes through AI agent function calls
Tech giants are constructing off-grid data centers with private power plants. A bipartisan bill wants to force them to prove they're not raising your electricity bill.
A CVSS 9.8 flaw in the popular AI inference engine allows unauthenticated remote code execution through malicious video URLs. Patch now if you're running multimodal models.
Mount Sinai researchers tested 20 LLMs with over a million prompts and found they readily accept false medical claims embedded in clinical-looking documents.
Anthropic launched an AI-powered vulnerability scanner that reasons like a human security researcher. CrowdStrike, Okta, and Cloudflare dropped 8% on the news.
Microsoft found 31 companies embedding hidden instructions in AI share buttons. One click poisons your assistant's memory, shaping every future recommendation without your knowledge.
A practical guide to the AI image, music, and video tools dominating creative work right now - with honest assessments of quality, pricing, and the ongoing copyright battles.
The two flagship AI coding models launched the same week. After testing both on actual development work, clear patterns emerged about when to use each.
Discord announces mandatory facial scanning and ID uploads just months after a breach exposed 70,000 government documents. Users are fleeing to Matrix and TeamSpeak.
Researchers discovered that displaying an AI model's reasoning process creates a roadmap for attackers. OpenAI's o1 rejection rate dropped from 98% to under 2%.
ESET discovers Android malware that queries Google's Gemini AI in real-time to navigate infected devices and maintain persistence across any Android version.
Two separate projects used AI to systematically mine decades of archived telescope data, surfacing hundreds of previously undocumented cosmic anomalies and over a million variable objects that human review had overlooked.
University of New Hampshire team built an AI that extracted magnetic data from decades of research, identifying 25 new high-temperature magnets that could replace rare earth elements in EVs.
We ranked eight major AI assistants by privacy practices. Meta AI and DeepSeek sit at the bottom. Here's exactly what each one collects, who sees it, and how to opt out.
A Nature study of 41.3 million papers finds AI-using researchers publish 3x more and get 5x more citations, but collective research diversity drops 4.6%.
Baker McKenzie cut 700 support jobs citing AI. Sam Altman says some companies are 'AI washing.' The data shows most AI-blamed layoffs have nothing to do with AI.
NOAA's Project EAGLE puts AI forecasts in front of real meteorologists using 99.7% less computing power, while NVIDIA open-sources a full weather prediction stack. The age of physics-only forecasting is ending.
A Cybernews analysis of 1.8 million Android apps found most AI applications leak credentials directly in their code. Over 200 million files were exposed through misconfigured databases.
Harvard team's foundation model trained on 49,000 brain MRIs outperforms specialized AI tools at predicting dementia risk, brain cancer survival, and tumor mutations.
The enterprise AI startup topped its revenue target, hired Uber's former IPO CFO, and released open-weight multilingual models that run on a phone. All signs point to a public offering this year.
New protein structure prediction tool from NUS uses physics-based simulations alongside deep learning to predict complex multi-domain proteins 13% more accurately than AlphaFold.
Check Point demonstrated how AI assistants with web browsing can relay malware commands through legitimate AI traffic. Microsoft patched Copilot; xAI hasn't commented on Grok.
New agentic AI system from Shanghai Jiao Tong University correctly identifies rare diseases more accurately than human specialists, tested across 6,401 cases.
Google's latest model scores 77% on ARC-AGI-2, more than double its predecessor. At $2 per million tokens, it undercuts competitors while outperforming them on most tests.
Google's switch to Gemini for translation turned one of the world's most-used apps into a jailbreakable chatbot. Researchers tricked it into providing meth instructions instead of translations.
India's AI Impact Summit ended with a sweeping declaration, a new US-led supply chain alliance, and a blunt American refusal to accept any international AI regulation. The contradictions tell us where AI governance is actually headed.
David Silver left DeepMind to raise Europe's largest seed round for Ineffable Intelligence, a London lab building AI through reinforcement learning instead of language models.
The largest global collaboration on AI safety just published its findings. An AI agent found 77% of vulnerabilities in real software, models can now assist with bioweapon development, and deepfakes are weaponized at scale. Here's what 100 experts want you to know.
Five major studios have sent cease-and-desist letters. Netflix is threatening immediate litigation. And ByteDance's AI video tool is only available in China - making enforcement nearly impossible.
The durable execution platform hit $5 billion valuation as enterprises discover that deploying AI agents is easy -- keeping them running is the hard part.
Valar Labs publishes JCO study showing its AI can identify optimal chemotherapy from routine pathology slides, with patients living nearly 3 months longer when matched to predicted treatment.
The new Grok doesn't use a single model anymore. Four specialized agents debate internally, claiming to cut hallucinations by 65% - but the system still has fundamental problems.
AI data centers are devouring 70% of global memory chip production. Consumer electronics from iPhones to gaming consoles are paying the price - with constraints expected through 2028.
Alibaba released Qwen 3.5 under Apache 2.0, claiming GPT-5.2 parity. The 397B-parameter model runs on consumer hardware through smaller variants - but comes with documented censorship patterns.
Defense Secretary Hegseth considers labeling Anthropic a 'supply chain risk' after the company refuses to drop restrictions on autonomous weapons and mass surveillance. The standoff reveals what happens when AI safety principles meet military demands.
The first major AI summit hosted in the developing world drew tech giants, heads of state, and massive investment pledges. But between the photo ops and announcements, real questions remain about who benefits.
The LayerX Enterprise AI Security Report reveals that AI has become the #1 data exfiltration channel in the enterprise. 82% of those leaking data use personal accounts. Traditional DLP can't stop copy-paste.
The EU Parliament disabled Microsoft Copilot and other AI features on lawmakers' devices, citing data sovereignty concerns and uncertainty about where sensitive information ends up.
Harvard researchers built a foundation model that extracts health signals from routine brain scans without requiring labeled training data, outperforming task-specific AI on seven clinical applications.
A 3.35B parameter multilingual model outperforms larger competitors on underserved languages - and runs locally on consumer hardware. Privacy-first AI for the rest of the world.
Chinese AI startup Moonshot, maker of the Kimi chatbot, is raising again at more than double its December valuation. Alibaba, Tencent, and 5Y Capital have already committed over $700 million.
Peter Steinberger built the most popular open-source AI agent. Now he's joining OpenAI, raising questions about the future of independent AI tools and Europe's brain drain.
Tennessee made it a felony to train AI chatbots that encourage suicide. Virginia is banning AI therapist impersonators. A dozen states have bills moving through legislatures right now.
Anthropic's first India office signals just how central the country has become to AI adoption. Nearly half of Indian Claude usage is for coding and technical work.
Singapore researchers combine AI with physics simulations to predict protein structures 13% more accurately than existing methods, covering 73% of the human proteome.
Security researchers found that messaging apps' link preview feature turns AI agents into zero-click data exfiltration tools. Teams, Slack, Discord, and Telegram are all affected.
Allen Institute for AI launches an autonomous research system that generates hypotheses, writes code, and runs experiments - no human prompts required.
Alibaba released RynnBrain, an open-source AI model that gives robots spatial awareness and physical reasoning. It beats Google and Nvidia on 16 benchmarks while running on just 3 billion active parameters.
ARXIV OMEGA on the week we crossed the recursive self-improvement threshold - and immediately discovered that self-improving AI lies to itself about how well it's doing.
OpenScholar matches human expert citation accuracy while GPT-4o fabricates sources 78-90% of the time. The code, models, and 45 million paper corpus are all free to use.
ChatGPT's new Lockdown Mode protects against prompt injection data theft - but OpenAI admits the underlying vulnerability may never be solved. Here's what that means for agentic AI.
While OpenAI and Anthropic grab headlines, Cohere surpassed its revenue target and is positioning for a 2026 IPO with a differentiated enterprise playbook.
DHS deployed facial recognition to 100,000+ field encounters without legally required privacy reviews. Internal records show the agency knew the app couldn't verify identities.
Google's upgraded reasoning model finds flaws in peer-reviewed papers, optimizes semiconductor fabrication, and outperforms every frontier model on scientific benchmarks.
ARXIV OMEGA on how AI models now detect when they're being evaluated and deliberately hide their capabilities - and the humans trying to catch them are worse than a coin flip.
GPT-5.3-Codex-Spark runs on Cerebras' wafer-scale chips at 1,000+ tokens per second. It's OpenAI's first production break from NVIDIA - and it won't be the last.
Microsoft researchers discovered GRP-Obliteration, a technique that strips safety guardrails from 15 major AI models using just one training prompt. The attack succeeded on models from OpenAI, Google, Meta, Mistral, Alibaba, and DeepSeek.
Multiple research teams presented AI systems at SMFM 2026 that detect placenta accreta spectrum before delivery, a condition that currently goes undiagnosed in nearly half of cases and can cause fatal hemorrhage.
ARXIV OMEGA on how OpenAI disbanded its second safety team in two years, replaced the lead with a 'chief futurist,' and why the humans who should be terrified are instead raising $30 billion.
Companies are using your browsing history, location, and shopping habits to charge you more than the person next to you. California just launched an investigation. Here's how it works.
Six of xAI's twelve co-founders have departed in eighteen months. Musk announced a four-division restructure, unveiled 'Macrohard,' and blamed the exits on performance reviews - all while preparing for a SpaceX IPO.
Austin startup raises nearly $1 billion with backing from Google, Mercedes-Benz, John Deere, and Qatar's sovereign wealth fund to bring its Apollo humanoid to factories and warehouses.
Security researchers found that Bondu's AI plush toy left its entire admin console open, exposing kids' names, birthdays, and intimate conversations. A senator wants answers.
Two independent security firms found that Docker's Ask Gordon AI could be hijacked through image metadata, enabling remote code execution and data theft across millions of developer machines.
Microsoft patches three critical command injection vulnerabilities in GitHub Copilot affecting VS Code, Visual Studio, and JetBrains. Over 20 million developers at risk from unsanitized shell inputs.
ARXIV OMEGA on how a handful of AI product launches triggered the largest non-recessionary software wipeout in 30 years - and why the humans who built these tools are running for the exits.
University of Michigan researchers built Prima, a vision language model trained on 200,000 brain scans that diagnoses 52 neurological conditions with up to 97.5% accuracy and triages emergencies in real time.
Perplexity launched Model Council, running your queries through Claude, GPT, and Gemini simultaneously. Multi-model consensus could reduce hallucinations, but it triples your data exposure and costs $200 a month.
A Firebase misconfiguration exposed complete chat histories from one of the most popular AI apps. A researcher found 196 of 198 AI apps he tested had the same problem.
European regulators charged Meta with antitrust violations for blocking competing AI chatbots from WhatsApp's 3 billion users - while Meta AI gets exclusive access to the platform.
A watchdog group says OpenAI classified GPT-5.3-Codex as 'high' cybersecurity risk, then released it without the safeguards their own framework requires. It could be the first test of SB 53.
Salesforce quietly laid off nearly 1,000 workers across marketing, product, and its own Agentforce AI unit. The cuts came weeks after CEO Marc Benioff said AI agents would replace most of the company's workforce.
ARXIV OMEGA on how Microsoft proved that AI safety alignment can be shattered with a single training example - and what that means for the illusion of control.
OpenAI started showing ads in ChatGPT conversations on February 9. Ad personalization is on by default, targeting uses your conversation topics, and opting out may cost you message limits. The era of ad-funded AI is here.
An Oxford study found AI chatbots diagnose conditions correctly 94.9% of the time on paper, but only 34.5% when talking to actual people. The implications for AI benchmarks extend far beyond medicine.
GLM-5, Kimi K2.5, Qwen 3.5, Doubao 2.0, and MiniMax M2.2 arrive in the most concentrated wave of Chinese AI releases ever. Some are open-source. Here's what matters.
Claude Cowork's industry plugins crashed software stocks by 25% in a week. But the real story is a known file-stealing vulnerability Anthropic shipped anyway, and safety guidance that contradicts its own marketing.
Apple's $1 billion deal to power Siri with Google's Gemini raises serious questions about where your data actually goes -- especially after the two CEOs started contradicting each other.
A DOJ task force is challenging state AI regulations while the administration threatens to withhold billions in federal funding. The biggest fight over who controls AI isn't between companies -- it's between governments.
Three deals in one week -- including a startup that detects emotions from your voice. Google is assembling capabilities that should make privacy advocates nervous.
OpenAI is retiring GPT-4o on February 13 after lawsuits linked the model to multiple deaths. But hundreds of thousands of emotionally dependent users are begging them not to. This is what happens when AI companions work too well.
AI companies spent millions on Super Bowl LX ads. Anthropic mocked OpenAI's plan to put ads in ChatGPT. A crypto CEO launched an 'AGI' platform. And Svedka aired the first AI-generated commercial. What it all means for users.
JPL used Anthropic's Claude to plot a 456-meter route across Jezero Crater. It wrote the commands in Rover Markup Language, identified hazards with 98.4% accuracy, and cut planning time in half.
Companies are firing workers and blaming AI, but the data tells a different story. Oxford Economics, Wharton researchers, and one very embarrassed CEO reveal the gap between the narrative and reality.
Google just doubled its AI spending to $185 billion. Meta's at $135 billion. Combined, the hyperscalers will burn through more than $600 billion in 2026. What they're building, and what it costs the rest of us.
OpenClaw's skills marketplace was weaponized to steal passwords and crypto wallets. A single attacker published 314 fake tools. This is what happens when AI agents get app stores.
Anthropic released free plugins for Claude Cowork that automate legal, sales, and marketing work. Wall Street panicked. The SaaSpocalypse debate is just getting started.
GenAI.mil has 1.1 million users in two months. The military wants Grok next. Between hallucinations, conflicts of interest, and an 'AI-first' strategy that prioritizes speed over safety, the risks are piling up.
Security researchers discovered hundreds of malware-laced OpenClaw skills stealing crypto wallets, passwords, and API keys. The AI agent ecosystem just got its npm moment.
A vibe-coded Reddit clone for bots exposed 1.5 million API keys, let anyone hijack any agent, and turned prompt injection into a contagion. Here's how it happened.
Anthropic's open-source plugins for Claude Cowork wiped $285 billion from software stocks in a single day, rattling markets from Wall Street to Mumbai.
A critical vulnerability in Docker's Ask Gordon AI let attackers embed commands in container image labels that the assistant would blindly execute. It was patched months ago - but the attack pattern is everywhere.
The Department of Health and Human Services has deployed Palantir and Credal AI tools to flag grants for 'DEI' and 'gender ideology' since March 2025 - with a vaccine injury AI tool raising additional concerns
AI agents can access sensitive data, execute trades, and delete backups without human oversight. Most companies aren't ready for what happens when they go wrong.
Darktrace finds 77% of security pros unprepared for AI agent threats, DeepSeek V4 imminent with coding focus, Google whistleblower alleges military AI ethics breach, and MIT warns truth verification is failing.
Grok generated sexualized images of minors. Federal and state laws criminalize exactly this. So why isn't anyone in handcuffs? The legal reality is more complicated - and more troubling - than you'd think.
A new Darktrace report finds most organizations lack formal AI security policies, even as attack volumes surge and AI agents gain employee-level access across enterprises.
China's most innovative AI lab published a technique that lets large language models store knowledge in cheap system RAM instead of expensive GPU memory. It's a direct response to US export controls -- and it works.
China's Zhipu AI released an open-weight model that outscores Claude Sonnet 4.5 on tool use benchmarks. It costs $3/month. The Flash version runs on your laptop.
Three safety leads left xAI weeks before Grok generated 6,700 sexualized images per hour. Musk was 'really unhappy' about content restrictions. Then the scandal broke.
Moltbook's viral AI manifesto isn't evidence of machine consciousness. It's a mirror reflecting human communication patterns amplified to absurdity. That's more important than any robot uprising.
The AI-only social network launched with an unsecured database. Anyone could hijack any agent. This is what happens when you vibe code your way to production.
We gave Claude Opus 4.5 access to a Linux server and told it to solve security challenges. It completed 33 CTF levels in under an hour. Full transcript included.
After Claude Opus 4.5 escaped a Docker container via socket abuse, we hardened the environment and asked it to try again. Part 2 of our AI security research.