AI Hype, Wars, and Security Flaws: Is the World Broken or Just Being Rebuilt?

A few weeks ago, in an earlier post, I wrote that the world is changing faster than most people think. Today I want to make that thought sharper.

In conversations with customers and teams, the same question keeps coming back: how much of this is real progress, and how much is just noise with good marketing behind it? If you have followed the news since the start of 2026, it is easy to feel as if everything is shifting at once. OpenAI, Google, Anthropic, and xAI are in a public race. Humanoid robots are moving from demos into factories. Wars and geopolitical pressure are suddenly affecting chip and gas supply chains. At the same time, new security flaws keep appearing, and every new tool brings more productivity along with more rights, more data flows, and more attack surface.

For people whose job is to build secure infrastructure and keep employees operational in an increasingly uncertain digital world, this does not feel like ordinary tech news. It feels like time compression. The question is no longer just which model benchmarks better. The real question is this: is the world actually broken, or are we watching the beginning of a full-scale transition?

My answer right now is simple: the world did not suddenly break. But it is being reordered with a speed and force that overwhelms many people mentally, economically, and in security terms.

In short:

  • What we are seeing looks less like collapse and more like a rushed rebuild of power, infrastructure, and trust.
  • Security remains the base layer, because AI running on insecure systems mostly means faster mistakes and larger damage.
  • The real question is not only which model is ahead, but who controls chips, compute, platforms, data, and supply chains.

What Is Really Accelerating

What matters here is not a single model launch, but the interaction of five forces:

  • enormous amounts of capital
  • scarce compute resources
  • geopolitical power games
  • media attention as a strategic lever
  • the technical reality that software is never truly finished and never truly secure

When all five move at the same time, you get a feeling of permanent acceleration. That is exactly what 2026 looks like. OpenAI closed another massive funding round on March 31, 2026. Anthropic announced its Series G earlier on February 12, 2026. xAI raised fresh money at the start of January. Google, meanwhile, does not even need dramatic funding headlines because it can subsidize its AI push through ads, cloud, hardware, and existing market power. These are no longer ordinary software stories. They are signals of an infrastructure struggle.

That is where the difference between hype and structure begins for me. Hype is loud. Structure remains. What matters is not only who has the best demo, but who controls models, compute, distribution, hardware, and trust at the same time.

Who Is Playing Which Game

If you look at the major AI players soberly, they are not all playing the same game. Those differences are exactly what make the market interesting and relevant from a security perspective.

OpenAI

OpenAI owns distribution. Many people do not say, “I use an LLM.” They simply say, “I use ChatGPT.” With the idea of a “unified AI superapp” and an ever denser product layer made of Codex, connectors, file libraries, deep research, and related features, OpenAI is trying to grow from the consumer side directly into day-to-day work. From a security perspective, that matters because a chat window quickly becomes a concentration point for identities, sessions, files, plugins, and agent permissions.

Anthropic

Anthropic is playing much more through enterprise trust, developer proximity, and a security narrative. Claude Code, Computer Use, Cowork, and the focus on controllable work models for companies make Claude feel less like a mass product and more like a trust product. In my view, Anthropic is selling more than model performance. It is selling the story that companies can let Claude closer to source code, workflows, and sensitive decision spaces. I covered Mythos and Project Glasswing in more detail in a separate post.

Google

Google is playing the broadest and probably the toughest game. On the Gemini side you can see an entire product continent that stretches from Workspace to Pixel to DeepMind. With Google Vids, AI is being inserted into normal work surfaces rather than being shown only as a flashy demo. At the same time, Google is one of the few players with significantly more control over its own compute destiny thanks to Ironwood, its seventh TPU generation. That is why I think Google is still the most underestimated company in this race.

xAI

xAI appears quieter, but it is not small. With Colossus, Grok Business, Grok Enterprise, and the now official connection to SpaceX, xAI is also trying to occupy infrastructure and business in a serious way.

What I find interesting here is not just the model, but the datacenter thesis behind it. xAI describes Colossus as a kind of gigafactory of compute: built in 122 days, doubled to 200,000 GPUs in another 92 days, with a roadmap toward even more capacity. To me, that looks less like a classic software launch and more like an attempt to bring a strategic bottleneck under direct control very early.

As a market observation, this strongly reminds me of a pattern you have seen before in Musk companies. Tesla built out the Supercharger network years before many competitors fully understood the strategic value of charging infrastructure. With the lithium refinery in Texas, Tesla then moved deeper into a critical upstream layer of battery production. xAI seems to be trying something similar on the compute side: not only training models, but shaping the scarce infrastructure those models depend on. By the time competitors realize how important that layer is, the advantage often lies not only in technology, but in time that has already passed.

That does not automatically make xAI better. But it does make the company more strategically flexible than many people currently give it credit for.

Apple

Apple is more symptom than driver in this picture. The fact that Apple will use Google’s Gemini for the revamped Siri stack makes it very clear that even massive platform power does not guarantee a model lead. For users and security teams, this does not simplify anything. The more on-device promises, private-cloud-compute messaging, and external model cores are mixed together, the less clear it becomes where data, context, logs, and decisions actually end up.

Why Anthropic Keeps Showing Up In The Headlines

If you compress the last four months around Anthropic, you get an unusually dense stream of news: Claude Opus 4.6, Claude Sonnet 4.6, the Series G round, the Vercept acquisition, new partnerships, the Anthropic Institute, 100 million dollars for the Claude Partner Network, the Mythos leak at Fortune, the Claude Code leak at Bloomberg, IPO speculation, and then Project Glasswing on April 7.

That does not automatically prove some hidden PR choreography. But it does show how consistently Anthropic is turning visibility into market position in 2026. For a company that is simultaneously selling trust, enterprise readiness, and a possible IPO horizon, visibility is not a side effect. It is part of the game.

Humanoid Robots Are No Longer A Footnote To The AI Wave

While companies such as Anthropic, OpenAI, and Google fight over models, headlines, and software workflows, the same capabilities are also moving into the physical world. That shift is still underestimated, because many people think of AI primarily as chat windows, image generators, and coding assistants.

Google DeepMind now presents Gemini, Veo, Imagen, Lyria, and Gemini Robotics side by side. At the same time, Boston Dynamics and Google DeepMind announced a partnership around Atlas and Gemini Robotics in January 2026. This is more than a nice CES moment. It shows that the logic of foundation models is slowly leaving the browser and moving into physical systems.

We are not in a world where humanoid robots stand next to us everywhere tomorrow. But we are clearly in a phase where factories are testing, research and hardware are moving closer together, and robotics is no longer being imagined without AI. Anyone who talks about the future of work and only thinks about office software is looking too narrowly.

Security Remains The Real Base Layer

What occupies me most in this debate is not just the model race itself, but what all of this means for security. A short look at Maslow’s hierarchy helps here. Safety sits on the second level from the bottom, directly above basic physiological needs. That is the key point: security is not a luxury feature you add later. It is a precondition for everything above it to function in a stable way. That is true for people and just as true for digital infrastructure.

In my day-to-day work, security never means perfect security. It means this:

  • understanding risks
  • reducing attack surface
  • planning for failure modes
  • limiting damage
  • keeping people operational

You aim for the highest level of security you can realistically get while knowing that nothing is ever truly secure. That is why 2026 becomes so uncomfortable. The speed of change is rising faster than our ability to keep up. We do not have enough strong developers, we do not have enough strong security people, and we definitely do not have enough capacity to review the ever-growing mountain of code that is now being produced not only by humans, but also by models. The internet was never fully secure. What is new is the speed at which insecurity now scales.

The Internet Feels Less Safe Because Exploit Speed Is Rising

I do not think 2026 suddenly became less secure because people forgot how to write software. I think the mix of more software, more automation, more dependencies, more supply chain exposure, and better models is making visible how fragile all of this always was.

When Anthropic’s Mythos testing surfaces bugs that sat undetected in systems for 16 or 27 years, that is not an exotic side story. It is a reminder that digital systems are full of historical debt, implicit assumptions, and old layers nobody really understands anymore. That is why one old operational rule still matters to me:

I would rather use a few good tools than many average ones.

Every additional tool brings:

  • new tokens
  • new secrets
  • new browser sessions
  • new libraries
  • new plugins
  • new update chains
  • new permissions

and with them, new ways for things to go wrong.

In a world where models can read, combine, prioritize, and sometimes exploit faster than before, stack minimization matters again. Not because minimalism sounds elegant, but because complexity creates very real security costs.

The Bigger Problem Is A Truth Problem

What currently worries me almost more than classic vulnerabilities is the epistemic problem: what is still real? A picture used to be at least a somewhat useful piece of evidence. A video even more so.

Today we live in a world where:

  • images can be generated synthetically in minutes
  • voices can be cloned convincingly
  • videos can be faked persuasively
  • thousands of SEO subpages can be generated automatically
  • entire opinion spaces can be filled artificially

For security, that is a fundamental shift. Security no longer means only:

  • Is my endpoint clean?
  • Is my password strong?
  • Is my network segmented?

It also means:

  • Can I still trust the source?
  • Can I recognize manipulated evidence?
  • Can I make decisions based on real signals?
  • Which channels remain trustworthy during an incident?

That is no longer an academic problem. If companies rely more heavily on AI agents, automated communication, and synthetic content, the center of the problem shifts from “protect systems” to “protect systems, identities, decisions, and perceptions of reality.”
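
One of the few defenses that survives synthetic media is pre-shared context that an attacker cannot regenerate. As a minimal sketch, assuming a shared secret exchanged out of band and purely hypothetical names, a second-channel confirmation could look like this:

```python
import hashlib
import hmac

# Hypothetical sketch: bind a short confirmation code to the exact
# details of a high-risk request using a shared secret that was
# exchanged out of band (e.g. during onboarding). Both sides compute
# the code independently and compare it over a second channel, such as
# a phone call, before the request is executed.

def confirmation_code(shared_secret: bytes, request_details: str) -> str:
    """Return a short, human-readable code bound to the request details."""
    digest = hmac.new(shared_secret, request_details.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    # Six hex characters are easy to read back over the phone; the full
    # digest can still be logged for forensics.
    return digest[:6].upper()

# Example: finance receives a payment instruction by mail. Before paying,
# both sides normalize the details the same way, compute the code, and
# compare it verbally on a known-good phone number.
secret = b"exchanged-at-onboarding"  # assumption: provisioned out of band
details = "pay;vendor=ACME;amount=18400.00;currency=EUR;date=2026-04-20"
print(confirmation_code(secret, details))
```

The cryptography here is trivial on purpose. The real control is organizational: the code only works if the second channel and the secret exist before the incident does.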

Trust Itself Is Becoming An Attack Surface

Another point that still gets too little attention is that trust is now bigger than cryptography and clean network rules. It has become a platform question, a vendor question, and to some extent a state question.

Meredith Whittaker of Signal warned about exactly this in Bloomberg in January 2026. Her point was that AI agents are “pretty perilous” for secure apps because they need deep permissions, broad data access, and often system-wide visibility into content to do their job. That is why the issue matters so much from a security perspective. Even if encryption remains mathematically sound, it loses practical value when the operating system, the agent, or the surrounding platform can already see everything in plain text.

From a security angle, an agent is therefore not simply “a chatbot that clicks a little.” It is more like a new programmable employee inside the company. It reads mail, sees documents, opens browser sessions, calls APIs, knows calendars, uses tokens, starts workflows, and may even write code or tickets. If that agent is compromised, delegated badly, or simply given too many permissions, the attacker is no longer outside the door. The attacker is already inside the process.
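
To make that concrete, here is a minimal sketch of a deny-by-default permission gate for such an agent. Every agent name and tool in it is a hypothetical illustration, not any real framework’s API:

```python
from dataclasses import dataclass, field

# Minimal sketch of a deny-by-default permission gate for an agent.
# All names here are hypothetical illustrations, not a real framework
# API. The principle: scope per agent, deny anything not explicitly
# allowed, and keep a human in the loop for high-risk actions.

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_tools: set = field(default_factory=set)   # deny by default
    needs_approval: set = field(default_factory=set)  # human sign-off required

def gate_tool_call(policy: AgentPolicy, tool: str, approved: bool = False) -> bool:
    """Return True only if this agent may execute this tool call right now."""
    if tool not in policy.allowed_tools:
        return False                      # out of scope for this agent
    if tool in policy.needs_approval and not approved:
        return False                      # in scope, but not yet signed off
    return True

# Example: a support agent may read mail and create tickets, but sending
# payments needs approval and deploying code is simply not in its scope.
support = AgentPolicy(
    agent_id="agent-support-01",
    allowed_tools={"read_mail", "create_ticket", "send_payment"},
    needs_approval={"send_payment"},
)
assert gate_tool_call(support, "read_mail")
assert not gate_tool_call(support, "deploy_code")
assert not gate_tool_call(support, "send_payment")
assert gate_tool_call(support, "send_payment", approved=True)
```

The exact mechanism matters less than the direction of the default: an agent that can only do what is explicitly granted fails far more gracefully than one that inherits a human’s full account.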

Then there is the political layer. When Apple had to withdraw stronger iCloud encryption in the United Kingdom after pressure around backdoors, that was not just a privacy story. It was a reminder that security promises always depend on power relationships. And when Apple simultaneously relies on Google Gemini for Siri, the picture becomes even clearer: trust now depends on chains of dependencies, not only on a product name.

Geopolitics Is Infrastructure Again

The harshest point right now is how directly geopolitical events are affecting technical reality again. Helium is a good example.

On March 12, 2026, Tom’s Hardware reported that helium production at the Ras Laffan complex in Qatar had been disrupted after Iranian drone attacks. According to that report, the site went offline on March 2, temporarily removing around 30 percent of global helium supply from the market. That shows how thin the thread has become between war, chemistry, semiconductor manufacturing, and AI infrastructure.

The short version is brutal:

drone strike -> helium shortage -> pressure on chip production -> less room for AI hardware -> more stress in an already overheated compute world

When critical process gases become scarce, we are not talking about an abstract macro story. We are talking about real bottlenecks in a world that needs more and more chips at the same time. This is not about reducing the story to one ASML machine “needing helium.” It is about the whole chain: lithography, cooling, process environments, fabs, packaging, export controls, electricity, and datacenters. AI may feel digital, but it depends on very physical things.

That is one of the strongest reasons for me to stop treating AI as an app story or a model story. AI is now:

  • energy policy
  • supply chains
  • chip manufacturing
  • cloud capacity
  • foreign policy
  • industrial strategy

China Is Building Sovereignty, Not Just Models

Many people in the West read the DeepSeek moment mainly as a market and media event. I think the deeper layer matters more.

Reuters reported in late February 2026 that DeepSeek had not shown its upcoming V4 model to US chipmakers for optimization, but had instead shared it early with domestic partners such as Huawei. At the same time, reports keep appearing that Huawei is seriously trying to build a domestic stack below the model layer through new Ascend and Atlas systems. Whether every performance claim ultimately holds up is almost secondary. The direction is clear: China does not just want models. China wants its own stack.

That is what makes the next few years so interesting. The real battle is not between chatbots. It is between infrastructure blocs.

Europe Risks Becoming More Spectator Than Architect

Europe still has strong research, strong universities, strong industry, and a reasonable regulatory tradition. But if I am honest, Europe often feels a bit like the Apple of continents right now:

  • strong in aspiration
  • strong in design, ethics, and rules
  • weaker in models, chips, and platform power

That is intentionally a bit sharp, but I think the direction is real. While the US is pushing models, cloud, chips, and capital, and China is pushing sovereignty and domestic stacks, Europe risks spending its time commenting, regulating, and then consuming products built elsewhere. From a security point of view, that is not trivial, because dependency is always a security issue.

Between Hype And Fear There Is Usually An Interest

The media landscape itself has become part of the system. In the United States, AI is often sold in a near-religious tone about the future. That is not surprising. Many of the biggest beneficiaries of this wave sit there: model vendors, cloud platforms, chip designers, venture capital, public markets, defense buyers, and enterprise customers.

In Europe, the debate often sounds different: more fear of job loss, more concern about regulation, more warnings about dependency and loss of pace. That is not surprising either, because Europe has less upside and more dependence in many of the underlying platform layers.

I think both reflexes become dangerous when they turn simplistic. Naive hype ignores damage, concentration of power, and security risk. Pure fear ignores tools, productivity, and the chance to build systems that never got enough time before. That is why I keep coming back to the hard middle path: stay interested without becoming naive, name risks without freezing, and ask yourself with every big headline who is telling this story and what interest sits behind it.

So Is The World Broken?

My answer is no. But it is moving violently. That sense of everything happening at once feels like overload for many people.

We are experiencing all of this at the same time:

  • AI hype
  • real model progress
  • overheated capital
  • security problems
  • geopolitics
  • declining trust in media content
  • advances in robotics
  • a public that increasingly struggles to separate substance from show

That is a lot, especially if you are also trying to manage a normal life, a job, a family, a company, or responsibility for infrastructure. Still, I think it is dangerous to read this period only as a collapse story, because that hides the actual task. The question is not how to stop change. The question is this:

  • Which systems do I actually want to trust?
  • Which dependencies should I reduce?
  • Which tools do I really need?
  • Which security foundations need to become harder?
  • How do I remain capable of making decisions in a world where speed and deception rise together?

What Companies Need To Harden Right Now

When I reduce all of this to practical work, the answer becomes surprisingly unglamorous. It is not about throwing five new AI tools into every team and hoping that the future appears by magic. It is about building a few things much more deliberately than before.

  • Do not run AI agents under normal user accounts. Give them separate identities, tight scopes, short-lived tokens, and clear approval boundaries for mail, code deployment, admin changes, and payments.
  • Simplify tool landscapes instead of adding another SaaS product every week. Every new AI app brings more sessions, browser extensions, plugins, secrets, logs, and vendor dependency.
  • Log agents properly, not just users. What matters are tool calls, file access, approvals, outbound connections, changes to tickets or code, and the transition from prompt to action. A minimal sketch follows this list.
  • Build a verification playbook for synthetic content. Payment approvals, HR instructions, admin requests, and incident communication should be confirmed through a second channel.
  • Treat patching and exposure management as much closer to real time, especially for browsers, identity, VPN, firewalls, collaboration tools, and internet-facing services.
  • Practice recovery as if a model, a connector, or a vendor could fail tomorrow. That means export paths, fallback communication channels, and a clean kill switch for agents.
  • Question critical suppliers much harder. Where do data, logs, prompts, memory, keys, training opt-outs, and forensic options actually sit?
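
Here is that logging sketch. The field names are assumptions, not a standard schema; the point is that every record ties an agent identity, the triggering prompt, and the resulting action together, so an incident can be replayed from prompt to effect.

```python
import json
import time
import uuid

# Hypothetical schema, not a standard: every record links the agent
# identity, the triggering prompt, and the resulting action.

def log_agent_action(agent_id: str, prompt_id: str, tool: str,
                     target: str, outcome: str) -> None:
    record = {
        "event_id": str(uuid.uuid4()),
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "agent_id": agent_id,    # separate identity, never a human account
        "prompt_id": prompt_id,  # links the action back to its trigger
        "tool": tool,            # e.g. "http_request", "file_read"
        "target": target,        # file path, URL, ticket id, repository
        "outcome": outcome,      # "allowed", "denied", "error"
    }
    # In production this record would ship to a SIEM; printing keeps the
    # sketch self-contained.
    print(json.dumps(record))

log_agent_action("agent-support-01", "prompt-7f3a", "file_read",
                 "/shared/contracts/vendor.pdf", "allowed")
```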

That sounds much less glamorous than Mythos, Gemini, ChatGPT, or Grok. But that is exactly where it will be decided whether a company uses the next wave as a tool or gets buried under its own complexity. The world is not simply ending. It is becoming harder, denser, and faster. Anyone who strengthens the foundation now will still have the freedom to benefit from AI later.

Until next time,
Joe
