Weekly Briefing

2025-12 Week 3 — Apple’s 'Liquid Glass' Reality, the Siri 2.0 Delay, and the Rise of Ambient Multimodality

December 20, 2025

As 2025 draws to a close, the tech giants are no longer fighting over who has the smartest LLM; they are fighting over who owns the “Reality Layer.”

The third week of December saw iOS 26.2 rolling out to users, refining the “Liquid Glass” design language that makes the interface feel as fluid as the AI powering it. Meanwhile, Google’s Project Astra has moved from the research lab into the hands of real-world testers, aiming to turn every smartphone camera into a sentient pair of eyes. But beneath the shiny updates lies a growing “Feature Gap”—the realization that while AI can now see and hear us in real time, the dream of a fully autonomous, conversational digital companion is still a few software cycles away.

The theme of the week: AI is moving off the screen and into our senses.


🔹 iOS 26.2: The “Liquid Glass” Interface and Sensory AI

Source: Apple Support 👉 Security Content: https://support.apple.com/en-us/125884

👉 Features Overview: https://support.apple.com/en-us/121115

  • Liquid Glass Design: Apple’s UI overhaul, Liquid Glass, debuted with iOS 26 and continues to be refined in iOS 26.2. It focuses on expressive, seamless transitions where app icons and controls adapt dynamically to the user’s environment and lighting (a brief SwiftUI sketch of adopting the effect follows this feature list).
  • AirPods Get “Visual”: Leaked code and early updates suggest Apple is preparing AirPods to support Visual Look Up via the iPhone camera, allowing users to identify objects and landmarks while hearing real-time descriptions through their buds.
  • Integrated Intelligence: Features like Live Translation in Messages and FaceTime are now standard, alongside advanced Clean Up tools in Photos that use generative AI to remove distractions with near-perfect background reconstruction.
  • Privacy First: Despite the deeper integration, Apple continues to push Private Cloud Compute, ensuring that even complex multimodal requests remain encrypted and invisible to the company.
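
For developers, the same treatment is exposed as API. Below is a minimal sketch of a custom control opting in, assuming the SwiftUI glassEffect modifier that shipped with the iOS 26 SDK; the button itself is purely illustrative, and the exact parameters (tint, shape, interactivity) are worth checking against current SDK documentation.

import SwiftUI

// Hypothetical control; assumes the iOS 26 SDK's glassEffect modifier for Liquid Glass.
struct ScanButton: View {
    var body: some View {
        Label("Scan", systemImage: "camera.viewfinder")
            .padding()
            .glassEffect() // default "regular" glass; the system adapts blur and tint to the content behind it
    }
}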

🔹 The “Siri 2.0” Delay: Navigating the Feature Gap

Source: Macworld, Elyment 👉 Analysis: https://www.macworld.com/article/3008896/

👉 Roadmap Updates: https://elyment.com.au/blog/apple-intelligence-in-dec-2025

  • The Wait Continues: While iOS 26.2 brought Contextual Screen Awareness, the full “Siri 2.0” overhaul, promised to include multi-turn conversational memory and deep third-party app control, has officially slipped to early 2026.
  • The Gap: Industry analysts are calling this the “Feature Gap.” While Google’s Gemini is currently outperforming Apple’s native models in photo editing and complex reasoning tasks, Apple is betting on a slower, more integrated rollout.
  • Future Integration: Apple is reportedly finalizing a $1 billion deal with Google to use a custom version of the Gemini LLM as the engine for Siri’s more complex, off-device reasoning.

🔹 Google Project Astra: The World as an Interface

Source: Google DeepMind, Tom’s Guide 👉 Project Overview: https://deepmind.google/models/project-astra/

👉 Release Timeline: https://www.tomsguide.com/ai/what-is-project-astra

  • Real-time Multimodality: Google DeepMind’s Project Astra is now being integrated into Gemini Live for select testers. It allows users to point their cameras at objects—from a broken laptop to a strange tropical fish—and ask “What is this?” with near-instant responses.
  • Proactive Agency: Unlike traditional bots, Astra is designed for Action Intelligence. It can proactively start conversations, remember key details from past interactions, and take actions like making a restaurant reservation based on a menu you just scanned.
  • Live Translation Expansion: As of December 13, Google Translate has begun delivering Live Audio Translation directly to supported headphones, effectively acting as a universal translator for real-world conversations.

🔹 Weekly Snapshot: Sensory Integration

  • The Vision → Google Project Astra and Apple’s Visual Intelligence turn the camera into the primary input for AI.
  • The Sound → Real-time audio translation and AI-enhanced AirPods move the interface into our ears.
  • The Polish → Liquid Glass design marks the first major aesthetic shift to accommodate an AI-first operating system.

🔹 Two Suggestions for Developers

  • Design for “Screen Awareness.” With Apple pushing Contextual Screen Awareness in iOS 26.2, your app’s UI is now a data source for the OS-level AI. Ensure your app uses standard accessibility labels; the better an agent can “read” your app, the more likely Siri or Gemini is to surface your features in its suggestions (a short SwiftUI sketch follows after this list).

  • Experiment with Real-time Visual APIs. If you are working on niche projects (such as dive-site keyword research or automated site analysis), look into Google’s Visual Interpreter research prototype. The ability for an AI to “see” a website’s layout and suggest SEO optimizations in real time is no longer sci-fi; it is already in developer preview (a hedged sketch of the general pattern follows below).
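
On the first suggestion, here is a minimal SwiftUI sketch of what making a view “readable” by an agent looks like in practice. The dive-site row and its wording are hypothetical; only the accessibility modifiers (accessibilityElement, accessibilityLabel, accessibilityValue, accessibilityHint) are standard SwiftUI API.

import SwiftUI

// Hypothetical list row for a dive-site app; the point is the accessibility
// metadata, which is what an OS-level agent actually "reads".
struct DiveSiteRow: View {
    let name: String
    let maxDepthMeters: Int

    var body: some View {
        HStack {
            Image(systemName: "water.waves")
            Text(name)
        }
        // Collapse the row into one element with a descriptive label, value, and hint
        // so a screen-aware assistant understands what the row represents.
        .accessibilityElement(children: .combine)
        .accessibilityLabel("Dive site \(name)")
        .accessibilityValue("Maximum depth \(maxDepthMeters) meters")
        .accessibilityHint("Shows conditions and entry points for \(name)")
    }
}

The same labels also improve VoiceOver today, so the work pays off even before OS-level agents start consuming them.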
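
On the second suggestion, Visual Interpreter itself has no public API yet, so the sketch below stands in with a plain REST call to Gemini’s generateContent endpoint, which already accepts inline images. The model name, prompt, and response handling are placeholders rather than details of Google’s prototype.

import Foundation

// Hedged sketch: send a page screenshot plus a prompt to a multimodal model and
// read back the first text suggestion. The request shape follows the public
// Gemini REST API; the model name below is a placeholder.
func requestLayoutReview(screenshot: Data, apiKey: String) async throws -> String {
    let model = "gemini-1.5-flash" // placeholder; use whichever multimodal model you have access to
    let url = URL(string: "https://generativelanguage.googleapis.com/v1beta/models/\(model):generateContent?key=\(apiKey)")!

    let textPart: [String: Any] = [
        "text": "Review this web page screenshot and suggest three on-page SEO improvements."
    ]
    let imagePart: [String: Any] = [
        "inline_data": ["mime_type": "image/png",
                        "data": screenshot.base64EncodedString()]
    ]
    let body: [String: Any] = ["contents": [["parts": [textPart, imagePart]]]]

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)

    // Walk the response JSON to the first candidate's first text part.
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let candidates = json?["candidates"] as? [[String: Any]]
    let content = candidates?.first?["content"] as? [String: Any]
    let parts = content?["parts"] as? [[String: Any]]
    return parts?.first?["text"] as? String ?? "No suggestion returned."
}

Wire this behind a headless-browser screenshot step and you have the skeleton of the automated site-analysis workflow described above; swap in Visual Interpreter if and when it gets a public surface.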