Practical AI Lands on iOS with Real Use Cases 📱

Apps Tap Apple AI for Smarter Features

In partnership with

You’re invited to the world’s largest email marketing conference.

Become an email marketing guru at the GURU conference. It’s two days full of all things email marketing. Learn more about newsletters, deliverability, design trends, AI, and what NOT to do with email.

What you can expect:

  • Keynote Speakers: Nicole Kidman, Amy Porterfield & more!

  • The latest digital trends in email marketing & how to increase performance.

  • Networking opportunities - each day!

  • DJs, dance contests (judged by Lance Bass, yes, for real), breaking world records & MORE!

Spots are limited. It’s VIRTUAL. It’s FREE. It’s time to become an email marketing GURU. Join 25,000+ marketers on November 6th & 7th. Don’t miss out!

Hey there, Tech Trailblazers! 🤖✨

Welcome, and thanks for reading this week.

A&O Shearman maps how AI is already reshaping drug discovery and diagnosis, and why safety, privacy, and bias demand clear rules to keep patients protected.

We preview the EU AI Act’s risk tiers and upcoming dates, the FDA’s review and authorization of more than 1,200 AI- or ML-enabled medical devices, and the WHO’s S.A.R.A.H. prototype. We also cover the authors’ call for scenario-based assessments, so that innovation and patient trust can move in tandem.

Let’s take a look.

📰 Upcoming in this issue

  • ⚖️ AI in Healthcare: Legal and Ethical Frontiers

  • 🔐 Google’s AP2 Secures AI Agent Payments

  • 🤝 AI, the New Partner in Thought Leadership

A&O Shearman maps clinical AI risks from data consent to liability. The article outlines practical guardrails leaders use to deploy safely and credibly.

Key Takeaways:

  • 🛡️ Data Law Compliance: Clinicians and vendors align with privacy regimes and sector rules, securing consent, data minimization, and lawful bases for training data.

  • 🔎 Transparency and Explainability: Providers document model design and limitations so patients and regulators understand outputs, enabling oversight, appeal, and informed consent.

  • 🧪 Bias and Safety Monitoring: Teams test for bias, validate performance on local populations, and run post-market monitoring to catch drift and harms early.

  • 📜 Accountability and Liability: Clear allocation of responsibilities, audit trails, and human-in-the-loop controls manage negligence risk and clarify who answers when AI decisions cause harm.
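The post-market monitoring the article calls for often boils down to comparing a model’s recent inputs or scores against a validation baseline. As a minimal, hypothetical sketch (the `psi` function and the 0.2 alert threshold are common industry conventions, not anything A&O Shearman prescribes), a Population Stability Index check might look like this:

```python
import math

def psi(baseline: list[float], recent: list[float], bins: int = 5) -> float:
    """Population Stability Index between two samples over equal-width bins.

    Rule of thumb: PSI < 0.1 is stable, 0.1-0.2 warrants review,
    > 0.2 suggests meaningful drift worth investigating.
    """
    lo = min(baseline + recent)
    hi = max(baseline + recent)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bin_fractions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Smooth empty bins so the log term stays defined.
        return [(c or 0.5) / len(sample) for c in counts]

    b = bin_fractions(baseline)
    r = bin_fractions(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

baseline_scores = [1, 2, 3, 4, 5] * 4
drifted_scores = [x + 10 for x in baseline_scores]
print(round(psi(baseline_scores, baseline_scores), 3))  # 0.0 — identical distributions
# psi(baseline_scores, drifted_scores) is well above 0.2, which would trip an alert.
```

Running a check like this on a schedule, and logging the result to an audit trail, is one concrete way to connect the monitoring and accountability bullets above.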

🔐 Google’s AP2 Secures AI Agent Payments

Google unveils the Agent Payments Protocol (AP2), an open standard that lets AI agents pay on your behalf. Backed by more than 60 partners, it promises consent proofs and auditable receipts.

Key Takeaways:

  • 🌐 Open Standard: AP2 defines how agents initiate, authorize, and settle purchases, reducing fragmentation across wallets, gateways, and merchant systems.

  • ✅ Consent and Auditability: The protocol captures user intent, signs transactions, and issues verifiable receipts, reducing disputes for buyers and merchants.

  • 🤝 Industry Backing: Over 60 companies support AP2, including card networks, payment processors, and commerce platforms.

  • 🪙 Cards and Stablecoins: Works with credit and debit rails and stablecoins like USDC, enabling flexible settlement choices for global use cases.
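The core idea behind the consent proofs above — the user signs a spending mandate, the agent carries it, and any party can verify it wasn’t altered — can be sketched with a toy HMAC signature. This is an illustration only: the function names and payload fields below are hypothetical and do not follow the actual AP2 schema.

```python
import hashlib
import hmac
import json

# Stand-in for the user's signing key; AP2 itself uses richer cryptographic
# credentials than a shared secret.
SECRET = b"demo-user-key"

def sign_mandate(intent: dict, key: bytes = SECRET) -> dict:
    """Attach a signature proving the user authorized this spending intent."""
    payload = json.dumps(intent, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"intent": intent, "signature": sig}

def verify_mandate(mandate: dict, key: bytes = SECRET) -> bool:
    """Recompute the signature and check it matches, in constant time."""
    payload = json.dumps(mandate["intent"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mandate["signature"], expected)

mandate = sign_mandate({"merchant": "example-store", "max_amount_usd": 50})
print(verify_mandate(mandate))  # True

# If the agent (or anyone else) tampers with the intent, verification fails.
tampered = {"intent": {"merchant": "example-store", "max_amount_usd": 5000},
            "signature": mandate["signature"]}
print(verify_mandate(tampered))  # False
```

The same signed-mandate-plus-receipt pattern is what lets AP2 work across cards and stablecoin rails alike: the authorization travels with the transaction rather than living inside any one payment network.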

🤝 AI, the New Partner in Thought Leadership

A Fortune 100 CEO used AI to synthesize cross-industry debates from the last 30 days in an hour. It helped her address 50,000 employees with clarity.

Key Takeaways:

  • 🗣️ From Broadcast to Dialogue: Leaders use AI to listen at scale, parsing reports and sentiment to join real-time conversations.

  • 🧠 Sharper Synthesis: AI compares patterns across domains, helping connect ideas and stress-test messages before high-stakes audiences.

  • 🛠️ Practical Playbook: Prompts scan horizons, connect dots, and simulate skeptics, turning drafts into audience-ready narratives.

  • ❤️ Human Still Leads: AI augments perspective, while leaders choose the values, framing, and empathy that build trust.

Why It Matters

Regulation is now a key factor in determining speed and scale. Divergent frameworks across the EU, U.S., UK, and China raise compliance costs and legal risk if teams deploy without a plan, which can slow launches and expose reputations.

Using scenario-based assessments and preparing for EU AI Act obligations, while tracking FDA pathways and WHO guidance, helps teams ship safer tools faster, avoid costly rework, and protect both patient outcomes and brand credibility.

Until our next issue,

Samantha Vale
Editor-in-Chief
Get Nerdy With AI

P.S. Interested in sponsoring a future issue? Just reply to this email and I’ll send packages!

How was today's edition?

Rate this newsletter.
