Humans have evolved the capability to transcend the world our senses perceive and conjure alternate realities seemingly out of thin air. We call this capability imagination.
Like everything that makes us human, imagination is the result of evolution, a process geared toward advancements that favor survival, so it stands to reason that imagination is essential. By enabling us to envision potential outcomes, anticipate uncertainties, and adapt our strategies, imagination enhances our ability to navigate complex environments and challenges.
In our work at Flagship, it is imagination’s ability to birth transformative ideas that we most prize, liberating us from the constraints of reason to reconsider tomorrow’s possibilities. We call our approach emergent discovery — a stage-gated and systematic process akin to Darwinian evolution. Through a structured process of intellectual leaps, iterative search and experimentation, and selection, we pressure test ideas until they cannot be disproved. The seeds of each innovation — from microbiota-based therapeutics, mRNA vaccines, and redosable genetic medicines to more-sustainable farming practices — are planted when we untether our thinking from dogma and “reasonableness” to imagine alternative futures proposed as often far-flung what-if questions.
Our ability to systematize breakthroughs suggests genius is not necessarily a requirement, and that the imagination from which these ideas are born is perhaps not as enigmatic as we have come to believe. Through artificial intelligence (AI) we have begun to augment tasks that require human logic, reasoning, and precision — from summarizing meeting notes to twisting strings of amino acids into protein structures. AI is adept at simulating vast scenarios and making complex connections between concepts — capabilities central to human imagination. Perhaps imagination, this mysterious product of the human mind, is not so elusive or even exclusive to our minds. What if we could engineer a creativity copilot to augment our imagination and creativity?
From cognition to computation
At what point does imagination emerge? Computer scientist and philosopher Judea Pearl’s Ladder of Causation offers a compelling framework to explore this emergence by providing structure to human and machine reasoning. His three-level hierarchy — association, intervention, and counterfactuals — illustrates how humans move from basic observation to the sophisticated counterfactual reasoning that defines imagination. The first level, association, involves recognizing patterns and correlations — a domain where AI excels through its ability to process vast amounts of data. The second level, intervention, is where imagination begins to emerge as we move beyond passive observation to relax prior beliefs and actively test scenarios, akin to the way we explore what-if questions at Flagship. Finally, the top rung, counterfactuals, is where imagination thrives, enabling us to envision alternate realities and outcomes that diverge from what has actually occurred. The challenge, then, is to recapitulate these features computationally.
How could we steer AI from the first tier (where it already excels) toward tiers two and three? At a high level, we believe we can expand the range of outcomes AI considers by increasing the number of states it explores during a search for patterns and potential correlations. For instance, when AI systems like generative models are allowed to sample from a broader distribution, they can generate more diverse solutions, similar to how the human brain imagines various potential outcomes by considering different scenarios and timeframes.
There are several different ways in which this could be done. One common method is to adjust the "temperature" parameter of AI outputs, but we and others have been exploring interesting alternatives as well. Initially, such “relaxation” of prior beliefs should allow the AI to better simulate hypothetical scenarios to explore what-if questions, akin to experimenting with actions to predict their outcomes. As the sampling distribution widens, we expect that AI will increasingly emulate the human imagination process, allowing for diverse solutions that reflect human-like brainstorming.
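To make the temperature idea concrete, here is a minimal sketch of how temperature scaling reshapes a model's next-token distribution. The scores below are invented for illustration; a real language model produces a distribution over its entire vocabulary, but the mechanism is the same.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into probabilities.

    Higher temperatures flatten the distribution, giving unlikely
    tokens more probability mass; lower temperatures sharpen it
    toward the most likely token.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for continuations of "The pizza was ..."
# e.g. "delicious", "cold", "a vibrant swirl of neon colors"
logits = [4.0, 3.0, 1.0]

sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)

# At low temperature the top option dominates; at high temperature
# probability spreads toward the unconventional options.
print(sharp)
print(flat)
```

Raising the temperature is the simplest way to "relax" the model's prior beliefs: the ranking of options is unchanged, but the long tail of unlikely continuations is sampled far more often.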
We can push this idea even further to allow the model to distribute probabilities more evenly among possible outcomes. This gives less likely options more consideration, enhancing the novelty of outputs beyond convention. For example, when prompted with "The pizza was ... ," the model might now output "The pizza was a vibrant swirl of neon colors," rather than the more expected responses.
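One simple way to distribute probability more evenly, sketched below with invented numbers, is to interpolate the model's distribution with a uniform one. This is an illustrative technique, not a description of any particular production system.

```python
def flatten_toward_uniform(probs, alpha):
    """Interpolate a probability distribution with the uniform one.

    alpha = 0 keeps the original distribution; alpha = 1 makes every
    outcome equally likely. Intermediate values let rare, surprising
    continuations surface while preserving some of the model's prior.
    """
    n = len(probs)
    return [(1 - alpha) * p + alpha / n for p in probs]

# Hypothetical continuation probabilities for "The pizza was ..."
options = ["delicious", "cold", "a vibrant swirl of neon colors"]
probs = [0.80, 0.18, 0.02]

flattened = flatten_toward_uniform(probs, alpha=0.5)
for word, p in zip(options, flattened):
    print(f"{word}: {p:.3f}")
```

With half the mass redistributed uniformly, the "vibrant swirl" continuation goes from a 1-in-50 chance to roughly 1-in-6, which is the kind of shift that lets unconventional outputs appear at all.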
As the AI iterates and learns, it can update its embedding space, a high-dimensional representation in which related concepts sit closer together and dissimilar ones sit further apart. As the embedding space remodels, novel relationships emerge between previously distant concepts, mimicking the counterfactual reasoning and imaginative associations that spur innovation.
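The remodeling can be sketched in miniature: two toy concept vectors (the three-dimensional values are invented; real embeddings have hundreds or thousands of dimensions) start far apart, a small update nudges them toward each other, the kind of step training performs when concepts co-occur, and their similarity rises.

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 for aligned vectors, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nudge_together(u, v, lr=0.3):
    """Move two embeddings one step toward each other."""
    new_u = [a + lr * (b - a) for a, b in zip(u, v)]
    new_v = [b + lr * (a - b) for a, b in zip(u, v)]
    return new_u, new_v

# Toy embeddings for two initially distant concepts
concept_a = [1.0, 0.1, 0.0]
concept_b = [0.0, 0.2, 1.0]

before = cosine(concept_a, concept_b)
concept_a, concept_b = nudge_together(concept_a, concept_b)
after = cosine(concept_a, concept_b)

print(f"similarity before: {before:.2f}, after: {after:.2f}")
```

As such updates accumulate across many concepts, paths open up between regions of the space that were previously unconnected, which is the geometric counterpart of an unexpected association.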
A feature, not a bug
Those more familiar with AI may have read the above with alarm, identifying methods that could force an algorithm toward incorrect, nonsensical, and entirely fabricated outputs that ape plausibility. We call such outputs “hallucinations,” derided as a major fault that underscores the unreliability of AI.
So-called hallucinations represent the inklings of valuable unconventional outputs that illustrate the potential for counterfactual reasoning: "What if the world were different?" When these outputs align along Pearl's ladder, they could offer valuable unexpected combinations of concepts, rather than simply nonsense, that a human might not naturally consider, and they can do this at scale, generating thousands of potentially novel ideas. The challenge then becomes: How do we identify the diamonds in the rough?
In biotech, we already use AI at the association level to, for example, identify patterns in genetic data, such as correlations between specific gene expressions and disease resistance. We can also model the impact of altering genes at the intervention level, simulating edits or silencing sequences to predict enhanced resistance. But what could be mistaken as mere hallucinations at the counterfactual level could spark breakthroughs. For example, hallucinated ideas about synthetic genes or novel pathways could inspire innovations that transcend conventional biology, offering new directions for therapies or vaccines.
Just as the microbiologist Alexander Fleming recognized the potential in his contaminated bacterial cultures, following a line of reasoning that led to penicillin, our interpretation of hallucinated outputs matters as much as their generation. By embracing these peculiarities, we can discover unique solutions or pose scientific hypotheses that push the boundaries of traditional thinking. This creativity copilot could be particularly valuable where fresh perspectives and unorthodox approaches are key to breakthroughs, offering a form of artificial serendipity that enhances human capabilities.
Augmenting emergent discovery
Causal intelligence appears to be a defining trait that sets humans apart from other species as we currently understand them. It underpins our ability to imagine, innovate, and transform the world around us by expanding the frontiers of knowledge through the act of science, an act that has been historically human. We collect knowledge, then recombine, extend, and verify it, steps at which AI is becoming increasingly proficient. We see the potential to integrate AI into our emergent discovery process to help generate the types of far-flung hypotheses that lead to breakthroughs in human health and sustainability.
Flagship is developing AI to aid both reason and imagination, enhancing our ability to embrace and mine complexity and move away from reductionist science that has limited biology for years. Artificial intelligence and augmented imagination are the two AIs we foresee enabling bigger leaps into the future.
Discover how Flagship is advancing the boundaries of AI through our Pioneering Intelligence Initiative.