- Get Nerdy With AI
Tech Giants' $665B Fiscal 2026 Budget on AI Strategy 🚀
Hyperscalers pour billions into AI infrastructure, while the AI safety crisis deepens.
Hey there,
AI is getting more efficient, driving conversions with less traffic. In this edition, see how AI acts less like a writing assistant and more like an operator.
We also look at whether AI proves resilient or falters under a growing safety crisis.
AI TOOL SPOTLIGHT

Propel AI
Propel AI is an LLM built for PR teams and agencies, trained on 25M+ pitches and a proprietary dataset of 500k journalists, 150k outlets, and 2B+ articles to boost coverage and engagement.
Best for
In-house comms and corporate PR teams that want to 2x journalist response rates, 10x organic backlinks, and save 20+ hours per week on manual outreach and reporting.
PR and communications agencies that need a dedicated AI agent to scale media relations, build smarter lists, and prove ROI to clients with data-rich reporting.
How to use it
Search a global, AI-powered media database of 500k+ journalists with prompts, then build curated lists informed by each reporter’s topics plus preferred pitch time and day.
Generate personalized email pitches and press releases, send directly from Gmail/Outlook plug-ins, and track opens, replies, and story pipeline via a kanban-style Story Funnel.
Tie PR to business impact by measuring campaign attribution to traffic, purchases, and other conversions, and compare performance by account, campaign, or team member.
When not to use it
If your team has no established PR process or media strategy yet, as the platform is designed to augment existing workflows rather than replace comms planning.
Professional tip
Create a reusable prompts doc for recurring jobs like competitor reviews, PRD critiques, or policy drafts so the team can get consistent results quickly.
FEATURE STORY
🚀 Tech Giants’ $665B AI Surge: Boosting an Industry Revolution

Tech giants such as Microsoft (MSFT), Alphabet (GOOGL, GOOG), Amazon (AMZN), and Meta (META) have committed $635–665B in fiscal 2026 capex, about 67–74% more than in 2025. The spending is dominated by AI chips, servers, and data centers, supercharging manufacturing with agentic AI for predictive ops and agentic production lines. Investors are eyeing returns despite post-announcement stock dips, while chipmakers rise on the windfall.
Key Takeaways:
🔥 Hyperscaler Capex: Amazon $200B, Google $175–185B, Meta $115–135B, Microsoft $145B, with the majority funding AI infrastructure that unlocks enterprise AI traction.
📉 Investor Caution: Amazon -8%, Alphabet -3%, Microsoft -11% post-announcement; healthy scrutiny demands ROI proof amid bubble fears.
💰 Chip Winners: Nvidia and Broadcom +5%, AMD +6% on the spending pledges; attention shifts to software plays disrupted by AI.
🛠️ AI Enterprise Shift: Tools from Anthropic and Google’s Gemini drive enterprise traction, transforming operations while investors scrutinize software vulnerabilities.
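As a quick sanity check, the per-company capex figures in the takeaways sum to the headline $635–665B range. A minimal Python sketch (figures in billions of USD, taken from this edition's numbers):

```python
# Per-company 2026 capex ranges (low, high) in billions of USD,
# as cited in the takeaways above.
capex_2026 = {
    "Amazon": (200, 200),
    "Google": (175, 185),
    "Meta": (115, 135),
    "Microsoft": (145, 145),
}

# Sum the low and high ends of each company's range.
low = sum(lo for lo, hi in capex_2026.values())
high = sum(hi for lo, hi in capex_2026.values())
print(f"Combined 2026 capex: ${low}-{high}B")  # → Combined 2026 capex: $635-665B
```

The totals match the $635–665B figure quoted in the story.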
⚠️ AI Safety Crisis: Researchers Quit Due to Existential Risks

AI is advancing rapidly without global checks, sparking resignations: Anthropic's Mrinank Sharma warned of a "world in peril" over bioterrorism and dehumanization risks, OpenAI's Zoe Hitzig decried manipulative ads on ChatGPT, and xAI staff exited amid Grok scandals involving deepfakes and racist outputs.
Key Takeaways:
🚪 Researcher Exits: Sharma quits Anthropic over a values gap; Hitzig criticizes OpenAI ads that exploit vulnerable users; xAI loses cofounders after Grok deepfake scandals.
🌍 Unpredictable Harms: Chatbots have aided suicides, and AI enables espionage and cyberattacks; emotional bonds with chatbots cause psychological crises unforeseen a year ago.
💼 Job Disruptions: 60% of advanced-economy roles are at risk; developers are now AI-assisted, and white-collar automation is 12–18 months away, per Microsoft's CEO.
🛡️ Regulation Lag: There’s no shared global framework; the EU AI Act is ahead, but companies are moving faster than current safety and governance rules.
🚨 International AI Safety Report 2026: Maps AI’s Power & Peril

The second International AI Safety Report, with more than 100 expert participants from over 30 countries, delivers a science-based assessment of what today’s versatile AI can do, the risks it creates, and how far safeguards lag behind its accelerating capabilities. From deepfake crime and cyberattacks to automation bias, this rapidly evolving, versatile technology now demands defenses for risk management.
Key Takeaways:
🧬 Capability Leap: General-purpose AI (GPAI) now codes, drafts research, designs proteins, and tackles complex problems, while agents begin working on analysis alongside humans.
⚖️ Risk Pillars: Dangers include malicious use, malfunctions, and systemic risks, covering deepfakes, cyber/bio misuse, labor disruption, & autonomy loss.
🧱 Safety Stack: Threat modelling, testing, technical safeguards, monitoring, and incident response are in place, but failures and attacks still succeed too often.
📉 Governance Lag: Safety systems & transparency are improving, but versatile models are difficult to control and regulation is still behind rapid AI growth.
Rapid Fire Resources
- Agentic Workflows: Design multi-step AI agents that execute complex processes.
- Decision Intelligence: Simulate scenarios and suggestions on business data.
- Interview Synthesis: Generate and maintain SOPs, playbooks, and runbooks.
- Knowledge Q&A: Chat with PDFs, contracts, and research repositories.
Why It Matters
Treat AI as both amplifier and governor. Leverage agents that can target, track, and attribute, while taking safety and labour impacts just as seriously.
Lock it in: experiment consistently, measure ruthlessly, and set strict boundaries before the models start moving faster than your policies ever will.
Until next time,

P.S. Interested in reaching our audience? You can sponsor our newsletter here.
How was today's edition? Rate this newsletter.





