How AI Changes the Expert

A Future with Expert-Powered AI
— A Story by FwdFaster
We stand at the threshold of a significant shift that feels both sudden and inevitable, a moment where the tools we have built begin to mirror the minds that built them. Throughout history, every leap in technology has promised to redefine the human element, yet every era has proven that progress is hollow without the steady hand of the expert. Knowledge remains our most sacred currency, and as it becomes more abundant through silicon and code, the human wisdom required to wield it becomes more vital than ever before.
This is not a manifesto on automation. It is not an epitaph for the expert. This is an exploration of a new kind of mastery. One where your judgment is the "traceable signature" in every breakthrough. Technology has always presented us with the risk of being outpaced; yet, that risk has always been met by the human desire to master the machine. We are not losing our place; we are refining it to move forward faster.
We are FwdFaster, a European science venture, and we have started redefining the relationship between the expert and AI in search of major breakthroughs and advances.
(1/7)

System of Impact

The Societal Impact Experts can have by using AI

The deployment and mastery of AI constitute a transformative shift with the potential to reshape society in profound and lasting ways. As recognised by leading international institutions such as the European Commission (EC), the European Parliament (EP), the World Health Organization (WHO), the United Nations (UN), the World Economic Forum (WEF), and the Organisation for Economic Co-operation and Development (OECD), AI is a powerful general-purpose technology capable of addressing complex societal challenges, from modernising education and healthcare systems to advancing scientific research and accelerating climate action. It can be used by seasoned experts in critical moments and for important decisions that affect lives and societies. Such experts can thrive in an AI-saturated environment by offloading automatable work to an AI that serves as a well-trusted super-assistant, allowing them to focus on their unique human value: fully present for their patients, or working on the next major breakthrough that society needs.

At the same time, AI systems introduce significant risks and complexities, particularly concerning the protection of fundamental rights such as privacy, safety, intellectual property, security, and human autonomy. In response, the European Union's human-centric approach underscores the urgent need for robust governance frameworks to ensure that AI is developed and deployed in ways that are trustworthy. Such frameworks must be proportionate, risk-based, and scientifically sound and validated, combining clear regulatory safeguards with policies that promote innovation and fair competition, especially in high-risk and sensitive domains such as healthcare, public administration, and education, so that AI ultimately serves peace, dignity, and equality.

Over recent decades, the rapid expansion of data availability, advances in computational power, and improvements in human–machine interaction have accelerated AI's integration across society. AI is now embedded in both everyday life and professional environments, representing not a gradual evolution but a structural transformation in how knowledge is created, managed, and applied. Increasingly, these changes are framed as part of the Fifth Industrial Revolution, a phase that demands deliberate guidance and structured governance to ensure safe, ethical, and socially beneficial adoption in support of the expert. Emerging from the efficiency- and automation-driven priorities of Industry 4.0, this new paradigm recognises that scale and optimisation alone are insufficient to address today's social, environmental, and ethical challenges. Instead, it places humans and expert judgment back at the centre of innovation, emphasising collaboration between people and intelligent systems rather than substitution. This paradigm shift calls for coordinated action at the European level to ensure that technological progress remains aligned with societal and political values and the public interest among the Member States and beyond.

At the same time, advanced AI, robotics, and data systems function as augmentative tools: they first support creativity, contextual understanding, and decision-making capabilities that automation alone cannot replicate, and breakthroughs in many areas will follow at an unprecedented pace. This evolution also reflects a broader realignment of economic and governance priorities, integrating sustainability, resilience, inclusivity, and ethical responsibility into the entire ecosystem. Progress must increasingly be measured not only by efficiency but also by long-term societal and planetary well-being, something that does not appear on political and organisational agendas automatically.

Yet alongside AI's extraordinary potential lies a subtler challenge. While AI systems can sometimes produce results that are remarkably accurate and insightful, far more often their outputs are "almost right": plausible and coherent, but marked by an imperceptible flaw. This subtle dissonance is the critical gap in finding a correct and effective AI implementation, a gap that the seasoned expert, shaped by years of practice and contextual understanding, may still recognise instantly. The critical question is what happens as this gap becomes harder to perceive and the role of experts becomes ever more contested. When approximations grow increasingly convincing, the difference between the authentic and the automated risks fading from view. In low-stakes situations this may be acceptable, but where safety, ethics, accountability, or individual lives are involved, such distinctions matter deeply. These high-stakes contexts define the tension between human judgment and automated systems, a space in which society must learn to operate carefully, ensuring that technological power remains guided by human values, responsibility, and trust.
(2/7)

From Weak to Superintelligence

Nowadays, when you open an app, tool, or website, you are greeted with sparkles. It sometimes seems as if these four-point stars, a common symbol of AI, have become a must-have feature for every application. While these AI extensions are often useful, they vary widely in intelligence and effectiveness (in some cases, they feel more like marketing gimmicks). As AI suddenly seems to be everywhere, it raises questions about the actual intelligence of "AI": how intelligent is it today, and what does the future hold? Before we dive into these topics (we could talk about them for hours, but we will keep it brief), we first need to understand the field of AI, including its history and progress.

The current rapid advancements in AI are the result of decades of development in techniques, hardware, and tools that have converged to accelerate the technology. Since the establishment of AI as a scientific field in the 1950s, pioneers like Alan Turing, who questioned whether machines could "think," and John McCarthy, who coined the term "Artificial Intelligence," laid its groundwork. Over the years, AI as we know it today has been shaped by significant breakthroughs, from Marvin Minsky's early work on neural networks to the persistent advocacy for deep learning by Geoffrey Hinton (known to many as the "Godfather of AI"), which ultimately transformed the landscape of AI.

As the field continues to advance, systems are becoming increasingly capable, reliable, or “intelligent.” The strength of this intelligence is typically categorized into three types: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). ANI and AGI are also referred to as weak AI and strong AI, respectively.

Here’s a breakdown of the three levels of intelligence and capability:
Artificial Narrow Intelligence: This type of AI is specialized for specific tasks, such as personalizing medical treatments, synthesizing academic and regulatory literature, traffic management, weather forecasting, and simulating biological processes. It performs its designated tasks with great precision and often outperforms humans, including experts, in those areas. Well-known examples include AI systems that beat humans in board games like chess and Go.

Artificial General Intelligence: This refers to a type of artificial intelligence that can understand, reason, learn, and adapt across a wide range of intellectual tasks and domains, much like human cognitive abilities. Experience, ethics, and morality will be important aspects of this type of intelligence.
Artificial Superintelligence: This form of AI is so vastly superior to human intelligence that its development could lead to a society where diseases no longer exist, work is unnecessary, and happiness is widespread. However, it also poses an existential threat to humanity.
Various leading AI marketers (predominantly from the USA) suggest that AGI is just around the corner. They base these claims on their popular, rapidly developing, highly advanced "reasoning" generative AIs such as OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude. While these systems increasingly respond like humans, they do not yet reach the level of AGI as described above. Most AI scientists agree that significant breakthroughs, new methodologies, and substantial resources are still needed, and they do not expect AGI to emerge in the near future. Some even think it will take a very long time, or that it will never be achieved, given the limitations and challenges current AI systems still face.

Currently, all AI solutions are narrow in focus and will likely remain so for a while. Nevertheless, development is progressing at a pace we have never seen before, with increasingly capable, advanced, yet also risky, AI solutions entering our lives. 
(3/7)

Experts of every stripe

From the Brain to the Field

Expertise comes in, and is developed through, many forms and flavors. Whether you work as a surgeon, a medical guideline developer, or an information specialist, there are commonalities in how expertise is applied. Some experts rely more on cognitive skills, while others draw primarily on bodily experience. Some excel at quick responses, whereas others are deep-divers in a specific niche. There are experts focused on compliance and experts seeking major breakthroughs. This diversity shows that experts come in many different shapes in the real world.

An expert is typically defined as a person who has dedicated substantial time and effort to extensive formal training, often through academic institutions or specialized programs, complemented by considerable real-world experience in a specific domain. Consequently, this individual possesses a profound depth of knowledge, sophisticated technical skills, and a practical understanding of the nuances and complexities within their field. This combination of deep theoretical knowledge and practical experience allows the expert to solve difficult problems, make informed judgments, and offer authoritative advice that non-experts cannot. Their expertise is generally recognized and valued by peers and organizations within that field.

While all experts use their brains and skills, the "Cognitive-Only" expert focuses on manipulating existing information, whereas the "Physical/Field" expert focuses on acquiring new data or directly applying skills in a physical environment. Cognitive-Only experts operate primarily in the realm of abstract processing: their value lies in their ability to synthesize data, predict trends, and navigate complex systems from a central office or digital environment. The physical and field expert's expertise, by contrast, is inseparable from their physical presence. For many experts, these two modes sit on a spectrum.

The other important distinction between experts is that of the maintainer versus the disrupter. Maintainers are experts who provide the structural integrity of civilization. Their expertise is characterized by high-stakes reliability, consistency, and the management of existing complexity. If they do their job perfectly, you don't notice them; if they fail, the system collapses. The disrupter, on the other hand, operates at the edges of the known. Their expertise is characterized by the discovery of new "rules", the creation of new tools, (biological) compositions, or drugs, or the dismantling of old paradigms. They often make the current "system" obsolete and change the world. You can imagine that the impact of AI on the work of the expert will differ across these profiles: the cognitive-only maintainer will see their work change in a different way, and at a different pace, than the physical/field-based disrupter.

Before we continue to the next chapter, decide for your own expert role which of the following profiles fits you:

Cognitive-Only Maintainers

Cognitive-Only Maintainers are responsible for keeping complex systems understandable, lawful, and aligned with (EU-shared) standards. Their expertise lives, and is implemented, in documents, models, guidelines, decisions, and institutional memory. Typical examples include medical guideline developers, evidence reviewers and evaluators, policy officers translating legislation into operational rules and supporting companies, businesses, and organizations, compliance specialists, legal and financial auditors and risk managers, cybersecurity analysts monitoring abstract threat landscapes, and data governance and privacy professionals. Their work is often invisible. When done well, nothing breaks; when they do not perform their work perfectly, the consequences can be overwhelming: a regulation is applied inconsistently, a guideline produces unsafe outcomes, a system falls out of compliance across thousands of edge cases. AI enters this space as a powerful assistant and support tool, making the expert's life easier and more effective. It can summarize vast regulatory texts, compare versions of guidelines, flag inconsistencies, draft reports, monitor dashboards, and cross-reference evidence at scale. Tasks that once took weeks can be reduced to hours. But this is also where the danger of automation is most subtle and least directly visible, while still impacting their work significantly.

Cognitive-Only Disrupters

Cognitive-Only Disrupters are responsible for challenging and reconsidering how problems are framed, how knowledge is organized and developed, and how innovations are created. Their expertise lives in concepts, models, narratives, and alternative ways of seeing systems. Typical examples include researchers proposing new theoretical frameworks, economists redefining how value or risk is measured, strategists and policymakers rethinking governance, education, or markets, designers of new digital or policy paradigms, and ethicists and philosophers shaping emerging norms. Their work is often visible only in hindsight. When done well, it changes how everyone else thinks. Old questions lose relevance. New categories appear. Entire fields reorganize around a different understanding of what matters.

Field/Lab-Based Maintainers

Field/Lab-Based Maintainers are responsible for keeping physical systems functioning safely and reliably in the real world. Their expertise lives in hands-on practice, operational protocols, situational awareness, and experience accumulated through direct interaction with complex environments. Typical examples include clinicians applying medical guidelines at the bedside, engineers maintaining critical infrastructure, laboratory technicians ensuring experimental validity, safety inspectors, pilots, operators, and control-room professionals overseeing industrial or energy systems. Their work is often noticed only when something goes wrong. When done well, systems run quietly and continuously. Patients receive safe care. Infrastructure holds. Processes remain stable despite variability, wear, and uncertainty. AI enters this space as a powerful operational aid. It can monitor sensor data continuously, detect anomalies, support diagnostics, optimize maintenance schedules, and assist with real-time decision-making. Tasks that once relied on intermittent human attention can now be supported by constant machine vigilance. But physical reality rarely conforms perfectly to models, especially because human beings are involved.

Field/Lab-Based Disrupters

Field/Lab-Based Disrupters are responsible for expanding what is materially possible. Their expertise lives at the intersection of experimentation, discovery, and interaction with the physical world. Typical examples include biomedical researchers developing new therapies, materials scientists inventing novel compounds, climate and energy innovators, synthetic biologists, and engineers creating entirely new physical capabilities. Their work unfolds at the edge of uncertainty. Progress is nonlinear, failure is common, and consequences often extend far beyond the laboratory. AI enters this space as a dramatic accelerator. It can simulate complex systems, predict molecular or material behavior, explore vast design spaces, and optimize experimental pathways. Entire lines of inquiry that once took years can now be explored in months. But discovery is not only a technical exercise.
(4/7)

Experts vs Artificial Intelligence

In 6 key examples

1. AI excels at rapidly analysing massive data sets to uncover hidden patterns humans cannot detect. Building on this, it enables predictive forecasting, for example by developing digital twins and learning signals to anticipate trends in weather, traffic, or disease, supporting proactive decision-making for better climate mitigation strategies, better mobility management, and preventive, personalized medicine.

2. AI transforms complex, often unstandardized data and knowledge into actionable insight by synthesizing signals across time and domains. It supports foresight and innovation by simulating scenarios, testing alternatives, and generating new hypotheses, helping decision-makers act earlier, design smarter systems, and deliver more efficient, adaptive solutions across areas like climate, mobility, and healthcare.

3. AI excels at generating ideas from existing knowledge and from historical and new information, facilitating the development of potential leads, hypotheses, and additional insights, as well as alternative perspectives, structures, or scenarios, while reducing time and costs (e.g., energy, money).

4. Experts bring accountability, empathy, and ethical judgment to decision-making, ensuring that outcomes are socially responsible and morally sound rather than merely statistically probable. At the same time, they translate human intelligence into the physical world by uniting reasoning, sensory awareness, and hands-on interaction, enabling meaningful exploration, experimentation, and learning through real-world experience, including trial and error.

5. Experts excel at interpreting nuance and coherence, intuitively recognizing when results no longer align with real-world dynamics. By applying institutional memory and critical reasoning, they enrich data-driven insights with human judgment, balance, and contextual understanding.

6. Experts use their physical presence and appearance, eye contact, posture, tone of voice, and visible composure, to build trust, convey responsibility, and de-escalate high-stakes situations in ways AI cannot replicate. These embodied signals allow others to assess intent, credibility, and accountability in real time, providing social and ethical grounding that goes beyond analytical correctness. In such moments, authority and trust arise not from statistical confidence, but from a visibly human commitment to the decision and its consequences.
(5/7)

Intelligent Automation or Amplification?

Freeing the Mind of the Expert

Before we look more closely at the bond between experts and AI, it helps to examine how AI actually appears on the expert's desk: in the lab, in the hospital, and in the field. While we have so far viewed AI as one broad (yet narrow) technology, it manifests in our professional lives in two practical ways: (1) as a tool for managing burdensome, repetitive, and costly tasks, or (2) as a foundation and starting point for amplification.

The first category of AI applications, the automation category, often appears when experts find themselves working on tasks that are burdensome, could be improved, and entail high costs in time, energy, and money. Have you never thought, during a long day behind a computer, "Can't AI do this for me?" Many of us have, and so there are countless examples, from conducting document and literature reviews to automatically recording and transcribing patient interactions. In recent years, many tasks that could be automated using rule-based and logic-driven methods have already been automated. More recently, AI has enabled the automation of tasks that do not rely solely on rule-based or logic-driven approaches. In particular, expert workflows dealing with semi-structured and unstructured data, such as documents, audio, images, and videos, can be made significantly more efficient and produce higher-quality work (read on to see which new skill you need for this).

The second, more imaginative category of AI applications, the amplification category, aims to solve problems that are currently beyond human experts or that require significant, expensive, and scarce resources, such as specialized experts, lab capacity, patients, or rare materials. These applications often capture the public's, and even experts', imagination, hinting at notions of real intelligence, though they remain examples of narrow AI. Although still rare, such applications are increasingly appearing across scientific and technological fields. They are real game-changers, reshaping the work of experts and amplifying progress.
A well-known AI tool in this category is Google DeepMind's AlphaFold. If you are an expert in drug development, you have most likely heard of it. This AI accurately predicts a protein's 3D structure from its amino acid sequence. By largely solving the long-standing "protein folding problem", AlphaFold has had an enormous impact on drug discovery and disease understanding, opening many new opportunities for experts and the public. The first drugs discovered with AlphaFold are expected within the next few years, once thorough clinical trials are successfully completed. Similar breakthrough AI applications are emerging in fields such as materials discovery, weather forecasting, and space exploration.

You might recognize that AI used for automation in expert tasks and AI for scientific breakthroughs are connected. This connection is evident throughout the history of innovation and new technologies: usually, a major breakthrough is needed to trigger a new wave of automation and progress. The AI solutions emerging today will likely shape the future of automation, as AlphaFold has done and will continue to do in drug development. We expect that recent foundational advances in AI, and importantly, a better understanding of AI, will lead to more frequent "amplifications", driving changes in both automation and innovative research. Ideally, experts freed from repetitive tasks will stand at the forefront of the next breakthrough, and automation is crucial to achieving that.



Image: FAM151A tertiary structure as predicted by AlphaFold2 (CC BY, EMBL-EBI)
(6/7)

A new Expert Skill: Supervising AI

Expert for the Oversight

As mentioned before, experts will encounter and develop many new AI applications in the coming years, both for automation and to amplify progress. Since AI solutions are far from perfect and many experts work in environments where decisions significantly affect human lives, it is crucial to maintain control and oversight over these AI applications and their outcomes. We will introduce you to a skill that should no longer be missing from any expert's toolbox: the ability to supervise AI.

Effective supervision of modern AI systems generally relies on experts, organizations, and technology. To keep these systems safe and reliable, we need Expert Oversight, where experts stay in control and make the final ethical decisions. We also need Scalable Oversight, because humans and experts cannot review all AI output alone; using "helper" AI systems (e.g., LLM-as-a-Judge), we can supervise AI systems effectively at scale. Since supervising AI is ineffective when limited to the output alone (modern systems are simply too complex for that), it is also essential to look under the hood: the Interpretability of the AI and its logic is crucial for supervision. Lastly, Governance, an often overlooked aspect of AI oversight, is key to successful supervision, as it sets the legal and, importantly, ethical foundation. In this article, we focus solely on expert oversight, which is most relevant to experts on a daily basis.
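To make the idea of scalable oversight concrete, below is a minimal sketch of the LLM-as-a-Judge pattern in Python. It is illustrative only: call_llm is a placeholder for whichever model API your organization uses, and the rubric, 0-10 scale, and escalation threshold are assumptions, not a standard.

```python
# Minimal sketch of scalable oversight via "LLM-as-a-Judge".
# Assumptions: `call_llm` stands in for a real model API; the rubric,
# 0-10 scale, and escalation threshold are illustrative choices.

JUDGE_PROMPT = """You are a strict reviewer. Rate the ANSWER below for
factual accuracy and guideline compliance on a scale of 0-10.
Reply with the number only.

QUESTION: {question}
ANSWER: {answer}"""

def call_llm(prompt: str) -> str:
    """Placeholder: route this prompt to your LLM provider of choice."""
    raise NotImplementedError

def needs_expert_review(question: str, answer: str, threshold: int = 7) -> bool:
    """Score one output with the helper model; escalate low scores."""
    score = int(call_llm(JUDGE_PROMPT.format(question=question, answer=answer)))
    return score < threshold

def triage(outputs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return only the (question, answer) pairs an expert must inspect,
    so human attention is spent on the doubtful fraction of the volume."""
    return [qa for qa in outputs if needs_expert_review(*qa)]
```

The design point is that the expert no longer reads every output; they read the escalated minority, and, crucially, they keep auditing the judge itself.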

Maintaining expert oversight of AI systems and applications may seem routine for many AI users today, especially with generative AI models such as ChatGPT, Gemini, and Claude: typically, you create content with these models, then assess, review, and modify it. However, oversight now involves a more structured approach, usually through two operational models that clarify where accountability and decision-making authority are assigned: Human-in-the-Loop (HITL) and Human-on-the-Loop (HOTL). In HITL, the expert must manually intervene and approve every specific action or decision before it is finalized, whereas in HOTL, the AI operates autonomously under the expert's supervision, with the human intervening only to override or veto the system if it deviates from expected norms.

Many good examples of HITL are found in medical diagnostic settings. Imagine a HITL scenario in which a radiologist uses an AI tool to detect early-stage lung nodules in CT scans. The AI scans the images, highlights potential areas of interest, and displays them on the radiologist's screen. The radiologist approves or rejects the findings, after which the patient can proceed with appropriate medical treatment. In this case, the expert is always the active gatekeeper and responsible for the final decisions. The downside of this method is that the process can never be faster than a human’s review time. 
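As a hedged illustration of this HITL pattern, consider the Python sketch below. The detector, the Finding record, and the console prompt are hypothetical stand-ins for a real radiology workflow; what matters is the structure of the gate, where nothing proceeds without explicit approval.

```python
# Minimal HITL sketch: every AI finding passes through an expert gate
# before any action is taken. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    scan_id: str
    region: str
    confidence: float

def detect_nodules(scan_id: str) -> list[Finding]:
    """Placeholder for the AI detector; returns candidate regions."""
    return [Finding(scan_id, "upper left lobe", 0.91)]

def expert_review(finding: Finding) -> bool:
    """The radiologist approves or rejects each finding on screen."""
    answer = input(f"{finding.scan_id} {finding.region} "
                   f"(conf {finding.confidence:.2f}) approve? [y/n] ")
    return answer.strip().lower() == "y"

def process_scan(scan_id: str) -> list[Finding]:
    # Nothing proceeds to treatment without explicit human approval:
    # the loop is exactly as fast as the expert's review time.
    return [f for f in detect_nodules(scan_id) if expert_review(f)]
```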

Nice HOTL examples, where AI operates autonomously under the expert's supervision, are found in quality assurance. Imagine a Quality Assurance (QA) employee supervising an automated sorting line for pre-packaged salads. The AI uses high-speed cameras to rapidly scan thousands of leaves and automatically remove those with pebbles or insects. Unlike the radiologist in the previous example, the QA employee does not approve every individual action; that is impossible given the number of leaves. Instead, the employee monitors a dashboard, and when a sudden, unexplained spike or drift in removals appears, they stop the process immediately and start investigating the AI system and processes.
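The sketch below, again in Python and again only illustrative, shows what such a dashboard alert might look like underneath: the line runs autonomously while a simple statistical monitor watches the removal rate and signals a halt on anomalies. The window size and z-score threshold are assumptions one would tune in practice.

```python
# Minimal HOTL sketch: the sorter runs autonomously while a supervisor
# watches an aggregate signal (the removal rate) and halts on anomalies.
# Window size and z-score threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

class RemovalRateMonitor:
    def __init__(self, window: int = 200, z_threshold: float = 4.0):
        self.rates = deque(maxlen=window)  # recent per-batch removal rates
        self.z_threshold = z_threshold

    def record(self, removed: int, inspected: int) -> bool:
        """Record one batch; return True if the line should be halted."""
        rate = removed / max(inspected, 1)
        if len(self.rates) >= 30:  # need a baseline before alerting
            mu, sigma = mean(self.rates), stdev(self.rates)
            if sigma > 0 and abs(rate - mu) / sigma > self.z_threshold:
                return True  # sudden spike or drift: stop and investigate
        self.rates.append(rate)
        return False

monitor = RemovalRateMonitor()
batches = [(3 + i % 2, 1000) for i in range(50)] + [(120, 1000)]  # final spike
for removed, inspected in batches:
    if monitor.record(removed, inspected):
        print("HALT: removal-rate anomaly; QA investigates the AI system")
```

Note the division of labor: the machine provides constant vigilance over every leaf, while the human provides judgment about what an anomaly means and whether the system itself has gone wrong.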

Experts of all the previously mentioned profiles will, when introduced to Human-in-the-Loop or Human-on-the-Loop systems, notice changes in their work and in the speed at which they operate. Previously, experts were mainly evaluated on (the quality of) their output; now they must also assess the effectiveness and capability of the system, using their own judgment and ethical principles. This happens in real time and therefore requires developing awareness of, and resistance to, automation bias, developing selective attention, and learning how to provide effective feedback to AI systems.
(7/7)

To the Future with AI-Powered Experts

How does the role of the expert change?

Across all four expert archetypes (cognitive and field-based, maintainers and disrupters), a comparable shift is visible, albeit in different forms. AI increasingly takes over the production of knowledge: summarizing, generating, optimizing, monitoring, and proposing at a speed and scale no human can match. What it does not take over is responsibility. As AI systems become more capable, the role of the human expert does not disappear; it fundamentally changes. The expert of the future is no longer primarily valued for recalling information, applying known rules, or generating first-order outputs. Instead, their value concentrates in judgment, accountability, ethical reasoning, supervision, and contextual interpretation. Experts become the final arbiters: the ones who understand not just what the system recommends, but how it can be improved, when it should be challenged, and what the consequences of acting on it will be. Whether interpreting a guideline, curating disruptive ideas, intervening in a live system, or deciding which innovations should move forward, the expert increasingly serves as the human boundary around automated, capable intelligence.

For Cognitive-Only Maintainers, AI systems can reproduce the formal characteristics of expertise by generating outputs that are coherent, structured, and ostensibly authoritative, yet without assuming the responsibility that human experts bear and will continue to bear over the coming decades. Although such systems perform well in terms of consistency and scale, they frequently lack the epistemic depth required for reliable interpretation and effective implementation. In particular, they may fail to account for the rationale behind specific exceptions, the historical evolution of rules shaped by prior failures and iterative learning, the deliberate preservation of unresolved ambiguities, or the ethical considerations that have informed necessary compromises. Consequently, while AI may approximate the appearance of expertise, it does not fully capture the contextual, historical, and normative dimensions that underpin expert judgment and institutional trust.

For example, an AI may draft a medical guideline that is statistically sound but overlooks why clinicians, with valid reasoning, historically deviated from that recommendation in specific populations. Or it may generate a compliance interpretation that is logically consistent but legally risky because it ignores precedent, intent, or enforcement culture within a specific country. For this kind of expert, the future is not about competing with AI on speed or scale. It is about becoming the final interpreter, the person who can say: "This looks correct, but given the context it is not, and we need to decide differently and keep investigating." Their value increasingly lies in judgment, accountability, effectiveness, capability, and institutional memory, things that cannot be inferred from data alone.

For Cognitive-Only Disrupters, by contrast, AI enters this space not as a replacement but as an amplifier of excellence. It can generate alternative framings, explore counterfactual scenarios, recombine ideas across disciplines, and surface patterns or analogies that would be difficult for any single human to produce alone. It dramatically lowers the cost of exploration and makes it far more effective. Lines of thought that once required months of work can now be sketched, expanded, and compared in hours, drawing on large amounts of information in an instant. Here too, the risks are subtle but significant. AI systems are capable of generating novelty at scale, producing ideas that are original, coherent, and intellectually compelling, yet often without the discernment required to evaluate their broader implications. Such outputs may fail to account for whether a society or institution is prepared for a conceptual shift, whether a new framing undermines trust or institutional legitimacy, whether seemingly elegant solutions introduce ethical risks that only emerge in practice, or whether apparent novelty reflects genuine insight rather than a recombination of existing patterns. As a result, AI-driven innovations may appear persuasive while lacking the evaluative judgment of experts that is necessary for responsible adoption and long-term societal impact.

For example, an AI may propose an innovative economic framework that optimizes efficiency while eroding social cohesion or fairness. The ideas look convincing, even visionary, but lack an anchor in lived consequences. For this kind of expert, the future is not about generating more ideas faster. It is about becoming the curator and boundary-setter, the person who decides which ideas deserve serious attention, which require careful testing, and which should be rejected despite their intellectual appeal. Their value increasingly lies in judgment, cultural awareness, and ethical responsibility: qualities that cannot be derived from pattern synthesis alone. AI expands the space of what is imaginable. Humans decide which imaginations are worth turning into reality.

For Field- or Lab-Based Maintainers, AI systems operate primarily on learned patterns and predefined thresholds, performing exceptionally well under expected and stable conditions. However, their effectiveness diminishes in situations that deviate from trained data, where early warning signals, rare but high-impact edge cases, or context-specific trade-offs between safety, speed, and outcome become critical. In such moments, ethical considerations that arise under time pressure or uncertainty may not be adequately captured by automated systems, underscoring the continued need for human judgment in maintaining safety and reliability in real-world settings. 

A clinical decision-support system may recommend a protocol-compliant treatment that conflicts with a patient's unique circumstances. An automated monitoring system may report normal operation while an experienced operator senses instability based on subtle cues. In these moments, the cost of error is not abstract; it is immediate and physical. For this kind of expert, the future is not about deferring judgment to machines. It is about becoming the responsible overseer, the person who knows when to trust automation, when to intervene, and when to use the automation differently. Their value increasingly lies in situational judgment, accountability, and the ability to act decisively when models fail, qualities that cannot be fully encoded or automated.

Finally, for Field- or Lab-Based Disrupters, AI systems can propose potential solutions or avenues for experimentation, but they lack the capacity to judge whether a prospective breakthrough should be pursued, whether it is ethically acceptable, socially beneficial, or likely to generate meaningful impact. Such systems are ill-equipped to assess how risks and benefits are distributed, particularly with respect to vulnerable social or cultural groups, or to anticipate long-term consequences, including negative externalities that may only become apparent over time. Moreover, AI cannot determine when caution or restraint constitutes the responsible course of action, especially in light of local, national, or European regulatory frameworks and strategic priorities, leaving these judgments firmly within the domain of human expertise. 

More specifically, an AI may identify a highly effective biological intervention without accounting for ecological impact or ethical acceptability. A materials optimization system may prioritize performance while ignoring environmental cost. The outputs are impressive, but the responsibility for consequences remains human. For this kind of expert, the future is not about accelerating experimentation alone. It is about becoming the moral and strategic decision-maker, the one who decides which discoveries are worth advancing and under what conditions. Their value increasingly lies in ethical judgment, long-term thinking, and responsibility for outcomes that extend well beyond immediate success.

This shift comes with significant consequences. In the long run, we will likely see a decline in demand for experts whose roles rely almost exclusively on routine cognitive work. For many professionals, this means actively rethinking career paths and embracing change management rather than resisting it. A new core skill emerges across domains: the ability to supervise AI systems, whether through HITL or HOTL models. Experts must learn how to validate outputs, detect failure modes, understand model limits, and intervene responsibly.

At the same time, experts carry a new obligation: ensuring that the AI systems they work with are capable, effective, and governed properly. This requires robust AI governance frameworks, validation processes, and institutional oversight. We do not move forward by using AI blindly; we move forward by building AI systems that are trustworthy enough to carry real responsibility, under human supervision. Crucially, AI should not consume expert time; it should free it. By embedding AI into daily workflows, repetitive and preparatory work can be automated, allowing experts to focus on what humans still do best: resolving ambiguity, making ethical trade-offs, and pushing toward the next meaningful breakthrough. Automation and discovery are not opposites; they form a cycle.

In essence, the AI pipeline we need is one where human expertise remains central, while the path toward high-risk, high-impact, AI-assisted decision-making becomes faster, safer, and more effective and efficient. The age of the expert is not ending, but it is undergoing its most profound transformation yet. With AGI and ASI still distant prospects, the immediate reality is close collaboration with increasingly powerful narrow AI and capable systems. It is the responsibility of today’s experts to ensure those systems truly add value. When experts can confidently claim that the AI they use is effective and capable, they can let go of the repetitive activities that take away a large amount of their time and energy, and focus on what really matters and what they have been educated for. That is how we unlock progress. And ultimately, that is what we are all aiming for: to move forward, faster.

We can move Forward Faster
Driven by Experts, Amplified with AI

This white paper was brought to you by FwdFaster.
All copyrights are reserved to FwdFaster.