Trust, Data, and Human Judgment - Former NASA Astronaut Andrew Feustel

January 14, 2026 | Thought Leadership


When you’re orbiting Earth at 28,000 kilometres per hour, data is not an abstract concept. It’s survival.

For Andrew Feustel, former NASA astronaut and International Space Station commander, every decision made in space depended on data: telemetry, sensors, simulations, and models. But as he explains in a recent episode of Data & AI Mastery, data alone is never enough. The real challenge lies in knowing when to trust it, and when not to.

In conversation with Dr. Raoul-Gabriel Urma, Andrew shares what decades of spaceflight can teach leaders navigating AI, automation, and decision-making here on Earth.

When Data Doesn’t Match Reality

One of Andrew’s most striking insights is deceptively simple: data never perfectly reflects reality.

In space, systems generate enormous volumes of telemetry, but sensors can fail, models can drift, and unexpected conditions emerge. Astronauts are trained to expect discrepancies, to recognise when the numbers don’t quite line up with what they’re seeing or feeling.

This has profound implications for organisations relying heavily on data and AI. Models are built on historical patterns, but the real world is messy, dynamic, and full of edge cases. Blind faith in dashboards or predictions can be dangerous, especially when stakes are high.

As Andrew puts it, astronauts are taught to constantly ask: Does this data make sense in context?

Why Humans Stay in the Loop

Despite the sophistication of space systems, astronauts remain deeply involved in decision-making. Automation supports them, but it doesn’t replace human judgment.

Andrew recalls moments from space missions, and from historic events like Apollo 11, where humans had to override automated systems because something felt wrong. Those instincts weren’t guesswork; they were the product of years of training, simulations, and exposure to failure scenarios.

The lesson for AI leaders is clear: human-in-the-loop systems are not a weakness; they are a safeguard.

In environments where errors are costly or irreversible, autonomy must be balanced with accountability. AI can process faster, spot patterns at scale, and reduce cognitive load, but humans provide context, ethics, and situational awareness.

Trust Is Built Through Exposure, Not Explanation

A recurring theme in the conversation is trust.

Andrew explains that astronauts don’t trust systems because someone tells them to; they trust them because they’ve trained with them repeatedly. Every system has been tested in simulations, stress scenarios, and failure modes long before it’s used in orbit.

This mirrors what many organisations struggle with in AI adoption. Trust isn’t built through documentation or policy alone. It’s built through hands-on experience, transparency, and understanding how systems behave when things go wrong.

For leaders, this means creating safe environments for experimentation, where teams can explore AI tools, understand their limits, and learn when to rely on them.

AI’s Role in Space — and Why It’s Growing

While astronauts still rely heavily on human judgment, Andrew is clear that AI already plays a critical role in space exploration.

Modern telescopes generate more data than humans could ever analyse alone. AI systems help identify anomalies, classify signals, and accelerate scientific discovery. Companies like SpaceX now use automated decision systems to land rockets with precision that would have been unimaginable decades ago.

Yet even here, the principle remains the same: automation supports exploration; it doesn’t replace responsibility.

As AI becomes more capable, the question isn’t whether to use it, but how to design systems where humans and machines complement each other.

Training as the Ultimate Competitive Advantage

Perhaps the most transferable lesson from NASA is its commitment to preparation.

Astronauts train for years for missions that last days or weeks. They rehearse scenarios that may never happen, because when things go wrong, there is no time to learn on the job.

This long-term investment in people contrasts sharply with how many organisations approach technology adoption. Tools are deployed quickly, but training, culture, and capability development lag behind.

Andrew’s experience highlights a powerful truth: mastery comes from preparation, not reaction.

For organisations adopting AI, this means investing not just in models and platforms, but in education, simulation, and continuous learning, especially for leaders making high-impact decisions.

From Space to the Boardroom

Although few leaders will ever perform a spacewalk, many now face decisions that are similarly complex, high-stakes, and irreversible.

Whether it’s deploying AI in healthcare, finance, infrastructure, or defence, the same principles apply:

  • Data must be interpreted, not blindly followed
  • Automation must be trusted, but also challenged
  • Humans must remain accountable for outcomes

Andrew’s perspective reminds us that technology does not remove responsibility; it amplifies it.

Final Reflection

Looking down at Earth from space gives you a powerful sense of perspective. For Andrew Feustel, that perspective extends far beyond orbit, into how we think about technology, trust, and leadership.

As organisations rush to adopt AI, the most enduring lesson from space may be this: progress depends not on removing humans from the loop, but on preparing them to lead within it.

At Cambridge Spark, we help leaders and teams build the data and AI fluency needed to make confident, responsible decisions in complex environments, where trust, judgment, and context matter as much as technology.

👉 Explore how we support AI leadership and workforce transformation.


Upskill your workforce

Upskill your workforce and accelerate your data transformation with expert technical programmes designed to create impact.

Contact us