
AI as the Catalyst of the New Paradigm of Science?



ARTICLE | BY Pavel Luksha


Abstract

The rapid advancement of Artificial Intelligence (AI) is reshaping the landscape of scientific inquiry. This article examines the challenges confronting modern science, including knowledge fragmentation, diminishing returns of current paradigms, and the disconnect between science and society. It explores how AI offers solutions—enhancing collaboration and integrating knowledge—and considers the possible “darker side” of using AI in research. Moreover, the advent of AI in science not only enhances scientific research but also redefines the role of human researchers and fosters a new paradigm of human understanding. By embracing these transformations, a more cohesive and responsive scientific enterprise can emerge.

The saddest aspect of life right now is that science gathers
knowledge faster than society gathers wisdom.

Isaac Asimov

1. The Evolution of Science: Where Are We Now?

The modern institution of science is largely a product of the Western Industrial Age. While systemic studies of nature and society have existed for millennia—leading to the astonishing achievements of Ancient Egypt, China, and the Islamic Golden Age—the concept of creating a specialized social institution dedicated solely to the accumulation of knowledge for human purposes is a Western European innovation. Beginning in the 17th century, intentional research conducted in universities across Europe helped advance the military and manufacturing superiority of European states and fueled colonial expansion.

Over the next two centuries, science became a government-supported activity that elevated national prestige and produced new tools, materials, and medicines. World War I, and even more so World War II, were massively shaped by scientific minds, as scientifically engineered mechanisms, explosives, and chemical agents dominated the battlefield—culminating in the detonation of atomic bombs over Hiroshima and Nagasaki, weapons of unprecedented destructive potential crafted through scientific endeavor (Doel & Harper 2023).

During the same period, however, science also increasingly started to play a role in creating knowledge for economic innovation. High-tech companies like Bell Labs, Bayer, and IBM became synonymous with cutting-edge research and development (R&D), while universities accumulated capacity for applied research that led to a multitude of inventions becoming new commercial enterprises. Today’s scientific research is strongly coupled with economic development and is seen as a primary source of national economic competitiveness—and the ongoing research “race” between the US and China in AI, brain research, biotechnology, and energy technologies underscores this trend (Atkinson & Foote 2019).

However, the future of science as a social institution is clouded by mounting systemic challenges. Three critical and interconnected issues stand out: the fragmentation of scientific knowledge, diminishing returns of scientific research productivity, and the potential decline of scientific knowledge creation.

1.1. Fragmentation of Scientific Knowledge

Today, the vast accumulation of knowledge surpasses individual comprehension. For example, reading the 170 million volumes in the Library of Congress would require approximately 3,200 lifetimes. With over 2.5 million new scientific papers published each year across more than 30,000 journals, scientists struggle to stay updated even within their narrow specialties (Johnson et al. 2018). This situation leads to fragmentation, where research becomes siloed, and interdisciplinary integration diminishes. Despite multiple calls for interdisciplinary approaches to address complex global challenges, actual integration occurs infrequently—one can even say that interdisciplinary studies become isolated as their own field, without truly integrating the fields they derive from (Ledford 2015). The barriers include differing terminologies, methodologies, and evaluation criteria across disciplines.

Additionally, not all scientific information is accurate or reliable. The proliferation of predatory journals that prioritize quantity over quality undermines the integrity of scientific literature (Beall 2016). Fields like psychology and biomedicine have faced replication crises, where many studies’ results cannot be reproduced, calling into question the validity of significant portions of research (Open Science Collaboration 2015). Where results can be replicated, they are subject to a “half-life” of facts, with new findings overturning established understandings (Arbesman 2012).

The exponential accumulation of knowledge has led to a situation where no single individual can comprehend or integrate the vast breadth of information available. Stanisław Lem (2013) indicated that this accumulation of knowledge that nobody can fully assimilate leads to a “crisis of representation” or “the rupture of the conceptual frontline.” There is no single entity that can truly represent the full body of human knowledge—and the ideal of the Enlightenment, where science can provide a holistic and non-controversial worldview, has collapsed.

1.2. Diminishing Returns of Research Productivity

There are also growing societal concerns about the efficiency of research, as people outside academia frequently do not share the academic conviction that “the meaning of life is the pursuit of knowledge.” The principle of “knowledge bang for the buck,” suggested by Charles Sanders Peirce, posits that the value of knowledge should be weighed against the resources expended to obtain it (Peirce 1879). In science’s early days, much was yet to be discovered, and the cost of inquiry was relatively low while the benefits were exceptionally high. For instance, Benjamin Franklin’s simple kite experiment led to the invention of the lightning rod, which has since saved countless structures. In recent decades, however, evidence suggests that despite increased investment in R&D, the rate of groundbreaking discoveries and productivity has diminished.

Bloom et al. (2020) demonstrated that research productivity has declined across various fields, including agriculture, medicine, and semiconductors. For instance, Moore’s Law—the doubling of transistors on integrated circuits approximately every two years—has been sustained, but the number of researchers required to achieve this has increased exponentially, indicating a steep decline in productivity per researcher.

In biomedical research, Scannell et al. (2012) observed an “Eroom’s Law” (Moore’s Law spelled backward), where the number of new drugs approved per billion U.S. dollars spent on R&D has halved approximately every nine years since 1950. Similarly, in physics, the discovery of fundamental particles requires enormous investments in large-scale facilities like the Large Hadron Collider, with diminishing probabilities of finding new particles (Hossenfelder 2018). Each additional increment of knowledge demands greater resources, time, and collaboration, often with less impactful results.
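
The compounding effect of Eroom’s Law can be illustrated with a short calculation. This is only a sketch: the nine-year halving rate is the figure cited above, while the 1950 baseline of 100 drugs per billion dollars is a purely illustrative placeholder, not a number from the literature.

```python
def drugs_per_billion(year, base_year=1950, base_value=100.0, halving_years=9.0):
    """Projected drugs approved per (inflation-adjusted) $1B of R&D spend,
    assuming the value halves every `halving_years` years (Eroom's Law).
    The base_value is a hypothetical placeholder for illustration."""
    elapsed = year - base_year
    return base_value * 0.5 ** (elapsed / halving_years)

# Three halving periods (27 years) cut output to one eighth; eight
# periods (72 years) cut it by a factor of 256.
for year in (1950, 1977, 2004, 2022):
    print(year, round(drugs_per_billion(year), 2))
```

Under these assumptions, output per dollar falls to one eighth of the baseline by 1977 and to well under one percent by the 2020s, which conveys why each marginal drug approval now demands vastly greater resources.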

1.3. “Limits to Growth” of Science

Science, as a social institution, is expected to produce results that benefit society. Our prevailing civilizational paradigm rests on the concept of progress—the belief that the continuous accumulation of knowledge will invariably yield new technologies to enhance human welfare. Progressivist thought hinges on the notion that science will perpetually provide fresh solutions as long as research persists. But could this cornucopia of solutions be finite? Is scientific knowledge creation itself facing limitations?

As argued above, the cost of new knowledge creation grows, and its benefits become less immediately apparent to the public. Scientific productivity appears to be slowing; even in well-funded fields, impactful publications decline year by year as the sheer size of a field impedes the uptake of new ideas (Chu & Evans, 2021). In some areas, such as fundamental physics, development has stalled since the 1980s (Baez 2023).

Nobel laureate Steven Weinberg (2012) argued that, with research becoming more specialized and frequently unreliable, scientific knowledge creation risks decoupling from the needs of the economy and society, and innovations may remain confined within academic circles without translating into practical applications or informing policy decisions. Society, corporations, and nations might challenge the future funding of expensive research projects, suggesting resources could be diverted to more practical needs. Scientists’ remuneration could decline, careers in science might lose allure, and the number of researchers could decrease significantly.

Historical precedents illustrate how societal shifts can impact scientific progress. After the collapse of the Soviet Union, funding for fundamental and applied science in Russia, Ukraine, Armenia, and other post-Soviet countries was massively reduced, and scientific activity waned. Companies that emerged in these countries preferred to procure Western technologies instead of investing in their own R&D, and governments attached little importance to scientific activities. Many researchers emigrated, and the once-renowned scientific community dwindled within two decades, with an evident erosion of the ability to generate intricate knowledge (Graham & Dezhina 2008).

"AI can act as an ‘artificial Aristotle,’ capable of synthesizing vast amounts of knowledge across fields into a consistent ‘knowledge tapestry’."

This phenomenon resembles Joseph Tainter’s concept of societal collapse due to diminishing returns on complexity, where established institutions become counterproductive, leading to a “reset” in societal complexity (Tainter 1988). While these possible limitations to scientific and technological progress still lie ahead, signs suggest a potential saturation point determined not only by the capacity to advance but also by society’s willingness to support these endeavors—or even due to cultural shifts such as the ones that occurred in Hellenistic Alexandria in the early Christian age when religious fundamentalists repressed scientific debates and research (Freeman 2003). In early 2025, one can clearly see how the influence of populist or anti-scientific worldviews can significantly disrupt scientific advancement—as demonstrated by the first decisions of the new Trump administration in the US, where skepticism about climate science and public health led to critical research lines being defunded or dismissed. We should not take continued scientific development for granted, as it hinges upon a sustained ability of science to produce results that are valued by society.

2. AI: Deus Ex Machina (or Not)?

The rise of Artificial Intelligence (AI) has taken over the front pages and the minds of academics and decision-makers. The quest for automating knowledge production is hardly new—as early as the 13th century, philosopher and theologian Ramon Llull envisioned a logical device to discover truth through mechanical means. In 1945, Vannevar Bush proposed the concept of the “memex,” a hypothetical machine that would organize and retrieve information through associative links, laying the groundwork for future information systems and the Internet (Bush 1945).

Machine learning has brought these concepts to an entirely new level. The analytical capabilities of Large Language Models (LLMs) like GPT-4 and Google’s Gemini already match or outperform those of an average PhD graduate on complex reasoning benchmark tests (Nori et al. 2023). LLMs show the capacity to formulate hypotheses, suggest investigative questions, and bridge ideas from distinct knowledge fields (Kumar et al. 2024). Moreover, AI models are not limited by human memory volume, attention span, communication capacity, or the need for regular rest, so their evolution will continue.

Can AI address the unfolding crisis of science, overcoming knowledge fragmentation and enhancing research work to mitigate the current trend of declining research productivity? Let us consider some possibilities of how AI will influence science in the foreseeable future, even before the hypothetical Artificial General Intelligence could arise.

2.1. AI-Assisted Synthesis of Science

In the past, the fragmentation of knowledge across specialized disciplines required human synthesizers such as Aristotle or the 18th-century French Encyclopédistes. AI can act as an “artificial Aristotle,” capable of synthesizing vast amounts of knowledge across fields into a consistent “knowledge tapestry.” Already now, AI-driven platforms can automate the aggregation of relevant literature, saving researchers time and reducing information overload (Fortunato et al. 2018). As AI capabilities evolve, further integration can be achieved in several ways:

  • Mapping Knowledge Fields: AI systems can create knowledge maps that locate existing bodies of knowledge in semantic space, connect disparate research findings, highlight missing links, and facilitate transdisciplinary integration. Tools like semantic knowledge graphs and AI-driven literature analysis platforms enable researchers to visualize relationships between concepts, publications, and researchers.
  • Discovering Knowledge Gaps: AI algorithms can detect patterns and correlations that might elude human researchers due to the sheer volume of data. Moreover, AI can assist in identifying gaps in knowledge, suggesting areas where further research is needed. As a recent example, self-organizing knowledge networks can be produced through agentic deep graph reasoning, enabling AI-driven autonomous hypothesis generation and scientific inquiry (Buehler 2025). This capability is crucial for strategic planning in research funding and policy-making. By providing a holistic view of the scientific landscape, AI helps overcome the silos that impede the integration of knowledge—and even counteract the biases and constraints that often arise when scientific priorities are set by like-minded communities.
  • Design of Scientific Explorations: By “connecting the dots,” AI can serve as a facilitator of collaboration and a builder of teams or networks. Systems like Microsoft’s Academic Graph and Semantic Scholar utilize AI to process millions of papers, helping identify emerging trends and potential collaborations. Recommendation systems can match researchers with complementary expertise, fostering interdisciplinary projects—or even bring disagreeing scientists into “adversarial collaborations” which can produce “intense intellectual competition designed to winnow false claims” (Clark et al. 2022).
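
As a minimal sketch of how “missing links” between fields might be surfaced, the toy example below applies common-neighbor link prediction to a small concept graph: pairs of concepts that share neighbors but have never been studied together are flagged as candidate gaps. All concept names are invented for illustration; real systems operate on graphs with millions of nodes and richer scoring.

```python
from itertools import combinations

# Toy co-occurrence graph of research concepts; an edge means the two
# concepts already appear together in the literature. Names are invented.
graph = {
    "protein folding":  {"deep learning", "crystallography"},
    "deep learning":    {"protein folding", "drug design", "climate modeling"},
    "crystallography":  {"protein folding"},
    "drug design":      {"deep learning", "clinical trials"},
    "clinical trials":  {"drug design"},
    "climate modeling": {"deep learning"},
}

def candidate_gaps(graph):
    """Rank unconnected concept pairs by shared neighbors (common-neighbor
    link prediction): many shared neighbors without a direct edge suggests
    an unexplored interdisciplinary connection."""
    scores = []
    for a, b in combinations(sorted(graph), 2):
        if b in graph[a]:
            continue  # already studied together, not a gap
        shared = len(graph[a] & graph[b])
        if shared:
            scores.append((shared, a, b))
    return sorted(scores, reverse=True)

for shared, a, b in candidate_gaps(graph):
    print(f"{a} <-> {b}: {shared} shared neighbor(s)")
```

Here the pairing of “drug design” with “protein folding” emerges as a candidate gap because both connect to “deep learning” without being linked directly, a miniature version of how knowledge maps can highlight missing links.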

2.2. Enhancing Research Work

One of the largest problems of modern science is the constantly increasing cost of experimentation. AI-enhanced tools like virtual laboratories allow researchers to conduct simulated experiments “in silico,” reducing the need for “in vitro” and “in vivo” experiments. These virtual labs help simulate complex systems, enabling scientists to test theories and observe outcomes rapidly and at lower costs. For instance, in drug discovery, AI models predict molecular interactions, accelerating the identification of potential therapeutics (Chen et al. 2018)—and more recently, also model how specific drugs impact patients, even in the absence of clinical studies, such as in the Enchant system. In materials science, AI algorithms design new materials with desired properties by exploring vast chemical spaces (Butler et al. 2018).

Moreover, AI assists in automating routine tasks such as data collection, processing, and initial analysis, freeing researchers to focus on interpretation and innovation. Advanced AI systems can analyze complex datasets, identify patterns, and recognize anomalies that might indicate new phenomena (Aldoseri et al. 2023). Furthermore, AI can even help generate plausible explanations for these irregularities, serving as a collaborator in hypothesizing and conceptual design.

"In the future, the evolution of AI can bring about the ‘Fifth Paradigm,’ in which data-driven science will enter loops of self-accelerated learning driven by AI."

What is more important is that these processes occur not through human-like reasoning but through sheer data processing. In the words of Rich Sutton (2019), “the biggest lesson from 70 years of AI research is that general methods that leverage computation are ultimately the most effective,” while “the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation.” The machine does not “understand” reality; it simply “grasps” patterns in the set of observations—and this is not how humans understand reality, even though AI models initially aimed to reproduce the way the human brain operates. Counterintuitively, this strategy often proves to be a better way of actually “understanding” reality, a phenomenon AI researchers call “the unreasonable effectiveness of data” (Halevy et al. 2009).

What we see is that AI is transforming the paradigm of knowledge creation, moving from classical hypothesis-driven approaches to data-driven science. Jim Gray called this shift the “Fourth Paradigm” of science—data-intensive scientific discovery (Hey, Tansley, & Tolle 2009)—the previous three being the experiment-driven, theory-driven, and computation-driven paradigms of research. An example of data-intensive research is DeepMind’s Nobel-prize-winning AlphaFold, which predicted the structures of over 200 million proteins (virtually all known protein sequences), reducing the time required to determine a protein structure from months or years (using traditional experimental methods like X-ray crystallography) to mere minutes. This acceleration has lowered barriers to drug discovery, disease research, and biotechnology innovation, catalyzing breakthroughs that would otherwise have taken decades of experimental research (Service 2024).

In the future, the evolution of AI can bring about the “Fifth Paradigm,” in which data-driven science will enter loops of self-accelerated learning driven by AI. Within this paradigm, AI systems would not only analyze data but also independently generate hypotheses, design experiments, and conduct them in virtual labs (“in silico”) or in Internet-of-Things-assisted physical environments, and iteratively refine their understanding based on outcomes, continuously improving scientific insight.
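
The hypothesize-experiment-refine loop of such a “Fifth Paradigm” can be caricatured in a few lines. The sketch below is purely illustrative: `virtual_experiment` is a hypothetical in-silico simulator, and the “design” step uses a crude distance-based uncertainty proxy in place of a real acquisition function.

```python
import random

random.seed(0)

def virtual_experiment(x):
    """Stand-in for an in-silico simulation: the hidden 'ground truth'
    the loop is trying to learn (hypothetical, for illustration only)."""
    return (x - 3.0) ** 2 + random.gauss(0.0, 0.05)

candidates = [i * 0.25 for i in range(41)]  # design space: 0.0 .. 10.0
observed = {}                               # x -> measured outcome

for step in range(12):
    # "Design" the next experiment: probe where we know least, i.e. the
    # candidate farthest from every measurement so far (a crude stand-in
    # for a proper acquisition function).
    if not observed:
        x_next = candidates[len(candidates) // 2]
    else:
        x_next = max(candidates,
                     key=lambda x: min(abs(x - xo) for xo in observed))
    observed[x_next] = virtual_experiment(x_next)  # run it "in silico"

# "Refine understanding": the current best hypothesis for the optimum.
best_x = min(observed, key=observed.get)
print(f"estimated optimum near x = {best_x}")
```

After a dozen automated experiment cycles the loop homes in on the region of the true optimum (x = 3) without any human choosing where to look, which is the essence of self-accelerated, AI-driven inquiry.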

2.3. Integrating Science into Decision-Making

The traditional separation between knowledge creation and knowledge utilization is diminishing. As data science becomes integral to various aspects of society, AI models are increasingly embedded within governance and decision-making processes. Governments and organizations are adopting digital twins—complex models of reality that simulate physical systems—to inform policy and operational decisions (Tao et al. 2019). For example, urban planners use AI-driven “digital twin” models to optimize transportation networks, energy consumption, and emergency responses (Batty 2018).

Predictive modeling, powered by AI, enables decision-makers to forecast outcomes and assess the potential impacts of policies or interventions. In public health, AI models predict disease outbreaks and evaluate the effectiveness of containment strategies (Ferguson et al. 2020)—for instance, an AI model for Rio de Janeiro predicted the 2024 dengue fever spikes months before traditional models would (Rockefeller Foundation, 2024). In environmental management, AI assists in monitoring climate change indicators and modeling ecological scenarios (Reichstein et al. 2019).
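
As an illustration of the kind of simulation underlying such predictive tools (a textbook compartmental model, not the cited Rio de Janeiro system), the sketch below runs a discrete-time SIR epidemic model with purely illustrative parameters:

```python
def simulate_sir(population=1_000_000, infected0=10,
                 beta=0.30, gamma=0.10, days=200):
    """Discrete-time SIR epidemic model. beta is the daily transmission
    rate and gamma the daily recovery rate; all values are illustrative."""
    s, i, r = population - infected0, float(infected0), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)  # track the infected curve over time
    return history

history = simulate_sir()
peak_day = history.index(max(history))
print(f"projected peak on day {peak_day} with ~{max(history):,.0f} infected")
```

A forecaster would fit beta and gamma to early surveillance data and then read off the projected peak, the simplest version of how a model can warn decision-makers about a spike months in advance.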

By integrating science directly into decision-making frameworks, AI bridges the gap between research and application. This integration ensures that policies are informed by the most current and comprehensive data available, enhancing their effectiveness and responsiveness. The convergence of AI, data science, and governance signifies a shift toward evidence-based decision-making, where the division between knowledge creation and utilization becomes increasingly blurred (Provost & Fawcett 2013). Science becomes an active component of societal development rather than an isolated institution.

This process mitigates the concern that science could become “irrelevant” to a society that may prioritize other activities over knowledge creation. In a world where every organization and household uses AI systems and digital twins, knowledge creation will become part of everything. On the other hand, it will also mean the end of “science as we know it”—the disappearance of many existing organizational forms and roles within the social institution of science. What new forms of knowledge creation and ways of knowing human societies will adopt in this case is a question worth exploring.

3. Curb Your Enthusiasm: The Darker Side of AI

While AI holds great promise for transforming science, it should be recognized that it has a multitude of existing and potential limitations as a vehicle for scientific development.

3.1. Biases and Blind Spots

Nora Bateson emphasizes that data collection is not observer-neutral; “cold” data always emerges from a “warm” system of relationships and contexts (Bateson 2016). AI models are trained on existing data, which may contain inherent biases. For example, IBM’s Watson for Oncology faced criticism for providing treatment recommendations based on training data primarily from well-respected cancer centers, which may not represent diverse patient populations (Ross & Swetlitz 2017). Similarly, AI algorithms used in hiring have exhibited gender and racial biases due to being trained on historical hiring data that reflects societal prejudices (Raub 2018). In scientific research, AI-enhanced tools could potentially cement existing biases within the literature and datasets. There is evidence that AI systems may reinforce prevailing paradigms and underrepresent minority viewpoints or novel hypotheses (Bender et al. 2021).

Additionally, these biases make AI systems susceptible to blind spots present in prior literature and datasets. AI, at least in its current form, tends to align with consensual knowledge and may struggle to formulate paradigm-shifting ideas based solely on existing evidence. This limitation could hinder scientific innovation if AI tools overly influence research directions without critical human oversight.

3.2. Vehicle for Undermining Academic Integrity and Promoting Untruth

AI’s capabilities also present risks to academic integrity. The ease of generating convincing but fabricated content can lead to the spread of misinformation and the erosion of trust in scientific literature. Generative AI models can produce plausible-sounding but incorrect or entirely fabricated research articles, data, and citations (van Dis et al. 2023). AI-generated texts may include “hallucinations,” where the model produces false information presented as factual, including nonexistent references or fabricated research findings that can deceive readers and reviewers not deeply familiar with the subject matter (Marcus & Davis 2019).

Moreover, malicious actors might deliberately use AI to create fraudulent studies or data to mislead competitors, sway public opinion, or secure funding unethically. The case of the fictitious researcher “Ike Antkare,” whose fabricated publications were used to highlight vulnerabilities in academic metrics, exemplifies how easily the system can be manipulated (Labbé & Labbé 2013).

The proliferation of AI-generated misinformation can lead to a “triumph of ignorance,” where falsehoods spread more rapidly than verifiable truths, undermining the foundation of scientific knowledge (Vosoughi, Roy, & Aral 2018). Addressing this issue requires developing robust verification tools, including AI-generated content detection protocols, fostering a culture of critical evaluation, and adapting academic standards to account for AI’s influence.

3.3. Devaluing Human Reasoning Capacity

The long-term effect of AI in science could be detrimental if AI is positioned as an alternative rather than an enhancement to human research. Overreliance on AI may lead to several concerns:

  • Erosion of Critical Thinking Skills: Dependence on AI for analysis and decision-making could diminish researchers’ abilities to critically assess data and methodologies, leading to intellectual complacency.
  • Loss of Creativity and Intuition: Human intuition and creativity are vital for scientific breakthroughs. AI lacks the experiential and emotional components that drive human innovation. Overemphasis on AI might stifle these human qualities (Floridi & Cowls 2019).
  • Reduction in Skill Development: Future scientists may become less proficient in fundamental research skills if they rely heavily on AI tools, potentially leading to a workforce ill-equipped to advance science independently (Susskind & Susskind 2015).

The risks of overreliance on AI are increasingly recognized, particularly regarding its potential impact on critical thinking and creativity among students and emerging scholars (O’Connor 2025). To mitigate these risks, it is essential to maintain a balance where AI serves as a tool to augment human capabilities rather than replace them.

4. The Role of Human Researchers in the Age of AI

As AI actively permeates the field of knowledge creation—and especially as the “Fifth Paradigm” of knowledge creation becomes fully realized—the role of human researchers and collectives in the coming decades warrants careful examination. Will humans be reduced to mere “sensors” and “effectors” within AI-governed systems, or is there a unique and indispensable role for human collectives in the future of science?

Firstly, it is important to acknowledge that this question is not merely hypothetical but increasingly relevant. The role of AI in human research will continue to expand, and for the foreseeable future, this integration will be highly beneficial. Human researchers and collectives will incorporate AI at various levels—from literature review and knowledge mapping to AI-assisted data analysis, hypothesis formulation, and the creation of project reports and academic articles. As individual researchers and teams learn to effectively utilize AI, those who do not adopt these tools are likely to become increasingly marginalized. Their work may be conducted and disseminated more slowly, be less integrative, miss critical hypotheses and opportunities, and require higher compensation due to inefficiencies.

As knowledge continues to grow in complexity, diversity, and compartmentalization, integrating it into human endeavors becomes a critical challenge. The classical division between theoretical (fundamental) and practical (applied) knowledge creation is unlikely to be maintained in the era of data-driven science. With significant amounts of new knowledge produced by hybrid human-AI systems, there will be a rising demand for “facilitators of knowledge-creating dialogues” or “knowledge weavers”—roles previously filled by educators, consultants, and interdisciplinary researchers. These facilitators will bridge the gap between specialized knowledge and practical application, ensuring that new insights are effectively integrated into societal progress.

Researchers and collectives will increasingly become “human-AI centaurs,” enhanced by AI and digital solutions. These high-performance teams will be facilitated by algorithms that enable dynamic team assembly with complementary talents, leveraging the potential of hybrid intelligence—collaborative human-machine systems combining individual perception, collective co-creation, and machine learning (Dellermann et al. 2019). In such hybrid intelligence groups, humans will lead in goal setting, decision making, and creativity, while AI assists with data processing, routine tasks, and enhancing group performance. AI can also be used to foster consensus among diverse perspectives, as exemplified by experiments with the “Habermas Machine,” an AI mediation tool that helped over 5,000 UK citizens find common ground in deliberations on politically divisive issues (Tessler et al. 2024).
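
One simple way such dynamic team assembly can work is a greedy set-cover heuristic: repeatedly add the researcher who contributes the most still-missing skills. The sketch below uses invented researchers and skills, and real matching systems would weigh many more factors (availability, track record, collaboration history).

```python
def assemble_team(required_skills, researchers):
    """Greedy set-cover heuristic for complementary team assembly:
    repeatedly pick whoever covers the most still-missing skills.
    A toy stand-in for the team-assembly algorithms discussed above."""
    missing = set(required_skills)
    team = []
    while missing:
        name, skills = max(researchers.items(),
                           key=lambda kv: len(kv[1] & missing))
        gained = skills & missing
        if not gained:
            break  # no one in the pool can cover what's left
        team.append(name)
        missing -= gained
    return team, missing

# Illustrative researcher pool (names and skills are invented).
pool = {
    "Ada":   {"machine learning", "statistics"},
    "Boris": {"genomics", "statistics"},
    "Chen":  {"genomics", "ethics", "policy"},
    "Dana":  {"visualization"},
}

team, uncovered = assemble_team(
    {"machine learning", "genomics", "ethics", "statistics"}, pool)
print("team:", team, "| uncovered skills:", uncovered)
```

Greedy set cover is not optimal in general, but it is fast and typically close, which is why variants of it appear wherever complementary expertise must be matched at scale.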

"Human researchers and collectives are not destined to become obsolete in the age of AI but will increasingly focus on roles emphasizing the unique human capacities for wisdom, ethical judgment, creativity, and the ability to navigate complex social and moral landscapes."

According to physicist Dirk Helbing (2019), within complex environments, collective intelligence systems are able to strategically outperform Artificial Intelligence over time, particularly in addressing challenges faced by human societies and the planet. Helbing argues that while AI excels in processing information and optimizing specific tasks, human collectives possess adaptability, creativity, and ethical reasoning that are crucial in navigating complex, dynamic systems. Therefore, the enhancement—not replacement—of human collective intelligence seems the most plausible and efficient way of deploying AI in science.

Moreover, there is a crucial distinction between human collectives and AI systems in terms of tackling complex challenges and finding effective solutions. Humans possess wisdom—the ability to utilize knowledge and reasoning coupled with insight and ethical judgment gained through life experience and reflection. Collective wisdom, manifested in various forms such as councils, think tanks, and interdisciplinary collaborations, has been a vehicle for the evolutionary success of humankind. In the future, it will be even more critical, as it enables ethical decision-making with an understanding of the long-term consequences of actions, offers a holistic view of the future that considers the well-being of future generations, and provides resilient and integrative perspectives in the face of complexity.

The concept of collective hybrid wisdom involves combining human insights with AI capabilities to promote deeper understanding and empathy. Frameworks for such integration are emerging, including inclusive deliberation platforms, ethics-aware AI, and wisdom-ranking mechanisms. For instance, AI can facilitate large-scale deliberations by organizing and summarizing inputs from diverse participants, ensuring that multiple perspectives are considered in decision-making processes.

Through the synergy of collective hybrid wisdom communities, enduring forms of collective consciousness can arise—what the author calls in his work the “Forest of Consciousness.” Visionaries such as Kevin Kelly, Valentin Turchin, Peter Russell, and Sri Aurobindo have suggested that this sustained collective consciousness marks the anticipated next step in the evolution of Earth’s living matter. Collaboration among hybrid wisdom-driven collectives working in knowledge creation could be the first step in that direction.

Human researchers and collectives are not destined to become obsolete in the age of AI but will increasingly focus on roles emphasizing the unique human capacities for wisdom, ethical judgment, creativity, and the ability to navigate complex social and moral landscapes.

5. Evolution of Worldviews

Embracing collective hybrid wisdom leads us to a critical reflection on the specialization of humans in an era dominated by data-driven knowledge and AI-enhanced agents. If machines excel at processing information and identifying patterns, what distinct roles will humans play? The answer lies in developing new ways of knowing—cultivating holistic perspectives, embracing complexity, and fostering wisdom that integrates emotional, ethical, and experiential dimensions. As indicated above, AI will probably serve convergent rather than divergent ways of comprehending the world. If we recognize that the majority of global challenges—from social injustice to climate and biodiversity crises—are produced by the way we understand reality (Freinacht 2017), then the only way to proceed is to evolve the existing paradigm.

AI can assist us in this endeavor but cannot do it for us. What is needed is a paradigm shift in how we perceive knowledge—moving beyond traditional scientific cognition to a more integrated, human-centric approach that values intuition, creativity, and ethical considerations. It is in this area that we can form a different context for AI cognition systems to operate—the very “shell” of “warm data” that Nora Bateson (2016) describes: new ways of looking at things so that the knowledge generated works toward the goal of long-term human flourishing in harmony with the world.

A new paradigm can be derived from the premise of discovering the unique human contribution to the evolution of our species, the biosphere, the planet, and the universe (Laszlo 2023). Should we continue seeing ourselves from an anthropocentric perspective, assuming that we are on a constant quest to conquer material forms and expand our presence in the universe? Or does the age of artificial intelligence give us an opportunity to discover that we are surrounded by a multitude of non-human intelligences, including perhaps the super-intelligence of planet Earth, as Adam Frank and his colleagues suggest (Frank et al. 2022)?

We may discover that this perspective is not novel—and remains common knowledge in indigenous communities across the world. While many such communities may not, on the surface, possess superior means of material production, they are considerably more effective in regulating their socio-emotional life, embracing the creativity of their members, enriching their daily lives, and serving their collective well-being (Mackean et al. 2022). They are also notably more effective in harmonizing their relationships with natural environments; for example, research by Julia Watson (2019) shows that some of the most sustainable solutions in construction are found within “traditional ecological knowledge,” successfully tested and refined over countless generations.

The objective is not simply to “plug” ancient knowledge into our modern context—but to embrace a holistic view that positions scientific, pre-scientific, and post-scientific thinking as essential components of a comprehensive framework. The relationship between different forms of knowledge is symbiotic; each mode of thought represents a valuable “piece of the jigsaw puzzle” that enables us to establish a profound and meaningful connection with the intricate and living world around us. This integrative perspective encourages us to recognize collective wisdom, embrace multidimensional learning, foster cross-disciplinary dialogue, and cultivate intellectual humility by exploring alternative forms of knowledge.

Evolving beyond current paradigms calls for a transition to “post-scientific thinking” or “Science 2.0,” as conceptualized by Otto Scharmer (2018). This approach views the researcher as part of the observing system; accordingly, it demands self-awareness and a deep understanding of one’s role within the broader context of the world, including understanding the purpose and impact of research (which inevitably becomes action-oriented). In this “embedded science,” the researcher is invited to engage in a profound journey of self-discovery and discovery of others, recognizing that personal awareness and transformation are integral to the process of knowledge creation. Ultimately, Science 2.0 transcends the boundaries of traditional science to integrate personal growth, community engagement, and experiential learning into the process of knowledge creation. This paradigm shifts from a narrow focus on theoretical knowledge to a holistic integration of thinking, feeling, and doing—where the separation between these aspects is dissolved, and they are seen as interconnected dimensions of human experience.

Integrating AI with new and traditional ways of knowing opens up the long-term potential for what the author calls “a meta-language of living complexity” (Luksha 2023)—an integrative approach in which AI tools can help humanity move towards a complexity-embracing perspective grounded in the organic, interconnected nature of reality itself. Real-time AI “digital twin” modeling could help manage the delicate interplay between ecological dynamics, technological infrastructure, and human needs—and help establish “techno-anthropo-bio-cenoses”, harmonious and regenerative communities in which humans, technology, and biological organisms coexist. Importantly, in techno-anthropo-bio-cenoses, the decision-making power is not handed to AI; instead, AI facilitates co-design and co-governance by human and non-human stakeholders alike, including technology and various forms of life.
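The digital-twin feedback loop mentioned above can be sketched in miniature. All names and dynamics here are invented for illustration (a simulated wetland water level, a twin that synchronizes with sensor readings, a pump recommendation that keeps the level inside an agreed ecological band); a real twin would couple far richer models of the ecosystem, infrastructure, and community needs:

```python
# Minimal sketch of a digital-twin control loop: the twin mirrors the observed
# state of a (simulated) wetland and recommends pump actions that keep the
# water level inside an ecologically agreed band. Purely illustrative.

TARGET_BAND = (0.8, 1.2)   # acceptable water level in metres (assumed)

class WetlandTwin:
    """Digital twin: tracks the observed state and recommends actions."""
    def __init__(self):
        self.level = None  # last synchronized water level

    def sync(self, reading: float) -> None:
        self.level = reading

    def recommend(self) -> float:
        """Pump flow per step: negative drains, positive refills, zero holds."""
        low, high = TARGET_BAND
        if self.level > high:
            return -(self.level - high)
        if self.level < low:
            return low - self.level
        return 0.0

def simulate(steps: int, inflow: float = 0.05) -> list[float]:
    level, twin, history = 1.0, WetlandTwin(), []
    for _ in range(steps):
        level += inflow            # e.g. steady rainfall raises the level
        twin.sync(level)           # twin synchronizes with the sensor reading
        level += twin.recommend()  # actuators apply the twin's recommendation
        history.append(level)
    return history

levels = simulate(20)
print(f"final level: {levels[-1]:.2f} m")
```

The essential design choice is visible even here: the twin never acts on the world directly; it only surfaces a recommendation, leaving room for the co-governance by human and non-human stakeholders that the paragraph above insists on.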

As Audre Lorde once said, “The master’s tools will never dismantle the master’s house,” highlighting that existing socio-economic and political structures are largely maintained by dominant worldviews. If the new knowledge paradigm is to become a conduit for a new socio-cultural reality—one that can help us overcome the unfolding planetary crisis—it must diverge from existing norms and perspectives. Profound transitions depend on shifting the foundations of our thinking and communication patterns. This new paradigm recognizes intrinsic complexity and adopts a holistic, organic, transdisciplinary worldview. Creating this is primarily the role of humans, not thinking machines.

6. Conclusion

With the rapid ascent of AI, we are experiencing one of the most radical revolutions in the means of intellectual work—perhaps even more dramatic in its impact than the Gutenberg press, and happening at a rate unseen in human history. The arrival of this technology in the realm of science presents us with both unparalleled opportunities and profound challenges. AI offers remarkable tools to enhance research, synthesize knowledge, and integrate scientific insights into decision-making. Yet, without careful stewardship, we risk entrenching biases, eroding academic integrity, and devaluing the very human capacities—creativity, intuition, and ethical reasoning—that have driven scientific progress for centuries. If left unchecked, the proliferation of AI could exacerbate the very problems we hope it might solve.

As new AI-supported research methods become more widespread, the organization of science, the role of researchers, and even our conception of what constitutes “good knowledge” will change. This pivotal moment calls for a paradigm shift—a reevaluation of what it means to know, understand, and innovate. We must move beyond the confines of traditional scientific paradigms and embrace a more holistic, transdisciplinary approach that values emotional intelligence, ethical considerations, and experiential wisdom alongside data and analytics. Cultivating collective hybrid wisdom, in all its varied forms, and setting pathways towards the emergence of AI-supported “techno-anthropo-bio-cenoses” are not luxuries but necessities if we are to navigate the intricate web of global challenges that defy simple solutions.

With AI permeating every facet of society, the role of human researchers becomes more critical, not less. It is up to us to ensure that AI serves as a catalyst for deeper insight rather than a substitute for critical thinking. We must foster environments where human creativity and ethical judgment are amplified by AI’s capabilities, not overshadowed by them. The future of science—and indeed, the future of humanity—hinges on our ability to consciously integrate AI into the fabric of our knowledge systems while preserving the essence of what makes us uniquely human. This demands intentional action: redefining educational models, reshaping research institutions, and cultivating a culture that prioritizes wisdom over mere information accumulation. Policymakers, educators, researchers, and global organizations must collaborate to shape an AI-empowered scientific enterprise that not only advances knowledge but also enhances societal and planetary well-being.

The question is not whether AI will change science—it already is—but whether we will guide this change toward the collective good. The time for reflection and action is now.

References

  1. Aldoseri, Abdulaziz, Khalifa N. Al-Khalifa and Abdel Magid Hamouda. 2023. “Re-Thinking Data Strategy and Integration for Artificial Intelligence: Concepts, Opportunities, and Challenges.” Applied Sciences 13 (12): 7082.
  2. Arbesman, Samuel. 2012. The Half-Life of Facts: Why Everything We Know Has an Expiration Date. New York: Current.
  3. Atkinson, Robert D., and Caleb Foote. 2019. Is China Catching Up to the United States in Innovation? Washington, DC: Information Technology and Innovation Foundation.
  4. Baez, John C. 2023. “The Future of Physics.” Santa Fe Institute Community Lecture, May. Accessed October 30, 2024. https://johncarlosbaez.wordpress.com/2023/05/31/the-future-of-physics/.
  5. Bateson, Nora. 2016. Small Arcs of Larger Circles: Framing Through Other Patterns. Axminster, Devon: Triarchy Press.
  6. Batty, Michael. 2018. “Digital Twins.” Environment and Planning B: Urban Analytics and City Science 45 (5): 817–820.
  7. Beall, Jeffrey. 2016. “Predatory Journals: Ban Predators from the Scientific Record.” Nature 534 (7607): 326.
  8. Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23.
  9. Bloom, Nicholas, Charles I. Jones, John Van Reenen, and Michael Webb. 2020. “Are Ideas Getting Harder to Find?” American Economic Review 110 (4): 1104–44.
  10. Buehler, Markus. 2025. “Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks.” arXiv preprint arXiv:2502.13025. https://www.arxiv.org/abs/2502.13025.
  11. Butler, Keith T., David W. Davies, Hugh Cartwright, Olexandr Isayev, and Aron Walsh. 2018. “Machine Learning for Molecular and Materials Science.” Nature 559 (7715): 547–55.
  12. Bush, Vannevar. 1945. “As We May Think.” The Atlantic Monthly, July. https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/.
  13. Chu, Johan, and James A. Evans. 2021. “Slowed Canonical Progress in Large Fields of Science.” Proceedings of the National Academy of Sciences 118 (41): e2021636118.
  14. Clark, Cory, Thomas Costello, Gregory Mitchell, and Philip Tetlock. 2022. “Keep Your Enemies Close: Adversarial Collaboration Will Improve Behavioral Science.” Journal of Applied Research in Memory and Cognition 11 (1): 1–18.
  15. Dellermann, Dominik, Philipp Ebel, Matthias Söllner, and Jan Marco Leimeister. 2019. “Hybrid Intelligence.” Business & Information Systems Engineering 61 (5): 637–43.
  16. Doel, Ronald E., and Kristine C. Harper. 2023. “Science and Technology.” In The Oxford Handbook of World War II, edited by G. Kurt Piehler and Jonathan A. Grant, 431–47. New York: Oxford University Press.
  17. Ferguson, Neil M., Daniel Laydon, Gemma Nedjati-Gilani, Natsuko Imai, Kylie Ainslie, Marc Baguelin, et al. 2020. “Impact of Non-Pharmaceutical Interventions (NPIs) to Reduce COVID-19 Mortality and Healthcare Demand.” Imperial College COVID-19 Response Team, March 16.
  18. Floridi, Luciano, and Josh Cowls. 2019. “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1 (1).
  19. Fortunato, Santo, Carl T. Bergstrom, Katy Börner, James A. Evans, Dirk Helbing, Staša Milojević, Alexander M. Petersen, et al. 2018. “Science of Science.” Science 359 (6379): eaao0185.
  20. Frank, Adam, David Grinspoon, and Sara Walker. 2022. “Intelligence as a Planetary Scale Process.” International Journal of Astrobiology 21 (2): 47–61.
  21. Freeman, Charles. 2003. The Closing of the Western Mind: The Rise of Faith and the Fall of Reason. New York: Vintage Books.
  22. Freinacht, Hanzi. 2017. The Listening Society: A Metamodern Guide to Politics, Book One. Metamoderna.
  23. Graham, Loren R., and Irina Dezhina. 2008. Science in the New Russia: Crisis, Aid, Reform. Bloomington: Indiana University Press.
  24. Halevy, Alon, Peter Norvig, and Fernando Pereira. 2009. “The Unreasonable Effectiveness of Data.” IEEE Intelligent Systems 24 (2): 8–12.
  25. Helbing, Dirk, ed. 2019. Towards Digital Enlightenment: Essays on the Dark and Light Sides of the Digital Revolution. Cham: Springer.
  26. Hey, Tony, Stewart Tansley, and Kristin Tolle, eds. 2009. The Fourth Paradigm: Data-Intensive Scientific Discovery. Redmond, WA: Microsoft Research.
  27. Hossenfelder, Sabine. 2018. Lost in Math: How Beauty Leads Physics Astray. New York: Basic Books.
  28. Johnson, Rob, Anthony Watkinson, and Michael Mabe. 2018. The STM Report: An Overview of Scientific and Scholarly Publishing. 5th ed. The Hague: International Association of Scientific, Technical and Medical Publishers.
  29. Kumar, Suman, Tirthankar Ghosal, Vishal Goyal, and Asif Ekbal. 2024. “Can Large Language Models Unlock Novel Scientific Research Ideas?” arXiv preprint arXiv:2409.06185. https://arxiv.org/abs/2409.06185.
  30. Labbé, Cyril, and Dominique Labbé. 2013. “Duplicate and Fake Publications in the Scientific Literature: How Many SCIgen Papers in Computer Science?” Scientometrics 94 (1): 379–96.
  31. Laszlo, Ervin. 2023. The Survival Imperative: Upshifting to Conscious Evolution. New York: Light on Light Press.
  32. Ledford, Heidi. 2015. “Team Science.” Nature 525 (7569): 308–11.
  33. Lem, Stanisław. 2013. Summa Technologiae. Translated by Joanna Zylinska. Minneapolis: University of Minnesota Press.
  34. Luksha, Pavel. 2023. The Next Hundred Years: A Bridgeway Across the Decisive Century. https://next100years.world/
  35. Mackean, Trish, Maryanne Shakespeare, and Matthew Fisher. 2022. “Indigenous and Non-Indigenous Theories of Wellbeing and Their Suitability for Wellbeing Policy.” International Journal of Environmental Research and Public Health 19 (18): 11693.
  36. Marcus, Gary, and Ernest Davis. 2019. Rebooting AI: Building Artificial Intelligence We Can Trust. New York: Pantheon Books.
  37. Nori, Harsha, Nicholas King, Scott M. McKinney, Devesh Carignan, and Eric Horvitz. 2023. “Capabilities of GPT-4 on Medical Challenge Problems.” arXiv preprint arXiv:2303.13375. https://arxiv.org/abs/2303.13375.
  38. O’Connor, Sarah. 2025. “Students Must Learn to Be More Than Mindless ‘Machine-Minders.’” Financial Times, March 4. https://www.ft.com/content/82d59679-0985-4c07-9416-06a0bec6e16a.
  39. Open Science Collaboration. 2015. “Estimating the Reproducibility of Psychological Science.” Science 349 (6251): aac4716.
  40. Peirce, Charles S. 1879. “Note on the Theory of the Economy of Research.” The Johns Hopkins University Circulars 1 (4): 2–3.
  41. Provost, Foster, and Tom Fawcett. 2013. “Data Science and Its Relationship to Big Data and Data-Driven Decision Making.” Big Data 1 (1): 51–59.
  42. Raub, Matthew. 2018. “Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices.” Arkansas Law Review 71: 529–70.
  43. Reichstein, Markus, Gustau Camps-Valls, Bjorn Stevens, Markus Jung, Joachim Denzler, et al. 2019. “Deep Learning and Process Understanding for Data-Driven Earth System Science.” Nature 566 (7743): 195–204.
  44. Rockefeller Foundation. 2024. Urban Climate-Health Action: A New Approach to Protecting Health in the Era of Climate Change. https://www.rockefellerfoundation.org/reports/urban-climate-health-action-a-new-approach-to-protecting-health-in-the-era-of-climate-change/
  45. Ross, Casey, and Ike Swetlitz. 2017. “IBM Pitched Its Watson Supercomputer as a Revolution in Cancer Care. It’s Nowhere Close.” STAT News, September 5. https://www.statnews.com/2017/09/05/watson-ibm-cancer/.
  46. Scharmer, C. Otto. 2018. The Essentials of Theory U: Core Principles and Applications. Oakland, CA: Berrett-Koehler Publishers.
  47. Scannell, Jack W., Alex Blanckley, Helen Boldon, and Brian Warrington. 2012. “Diagnosing the Decline in Pharmaceutical R&D Efficiency.” Nature Reviews Drug Discovery 11 (3): 191–200.
  48. Service, Robert F. 2024. “AI Tools Set Off an Explosion of Designer Proteins.” Science 386 (6719): 260–61.
  49. Sutton, Richard. 2019. “The Bitter Lesson.” http://www.incompleteideas.net/IncIdeas/BitterLesson.html.
  50. Susskind, Richard, and Daniel Susskind. 2015. The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford: Oxford University Press.
  51. Tainter, Joseph. 1988. The Collapse of Complex Societies. Cambridge: Cambridge University Press.
  52. Tao, Fei, He Zhang, Ang Liu, and A. Y. C. Nee. 2019. “Digital Twin–Driven Product Design Framework.” International Journal of Production Research 57 (12): 3935–53.
  53. Tessler, Michael, Michael Bakker, et al. 2024. “AI Can Help Humans Find Common Ground in Democratic Deliberation.” Science 386 (6719).
  54. van Dis, Emiel A. M., Johan Bollen, Willem Zuidema, Robert van Rooij, and Claudi L. H. Bockting. 2023. “ChatGPT: Five Priorities for Research.” Nature 614 (7947): 224–26.
  55. Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. “The Spread of True and False News Online.” Science 359 (6380): 1146–51.
  56. Watson, Julia. 2019. Lo—TEK: Design by Radical Indigenism. Cologne: Taschen.
  57. Weinberg, Steven. 2012. “The Crisis of Big Science.” The New York Review of Books, May 10. https://www.nybooks.com/articles/2012/05/10/crisis-big-science/.

About the Author(s)

Pavel Luksha

Founder & Director of Global Education Futures; Founder of University for the Earth; Associate Fellow, WAAS; Board Member, World University Consortium