# Devs Finale: Exploring Free Will, Determinism, and Time Paradoxes
## Chapter 1: Introduction to the Series
It's important to note that I’m a bit late to the discussion surrounding Devs, and I realize most critiques surfaced back in March. My focus here will not be a general review but rather a deep dive into the scientific and philosophical discrepancies found in the series finale. So, if you've already encountered extensive commentary on Alex Garland's unique cinematic style or the performance of the female lead, you can breathe easy: we’re skipping over that. This review assumes you're familiar with the final episode and, like myself, found the conclusion of an otherwise brilliant show unsatisfactory.
I genuinely wanted to adore Devs on FX on Hulu. At first glance, it seemed to have everything going for it: a narrative centered on a quantum simulator that champions determinism while dismissing the idea of free will, alongside commendable gender representation and Nick Offerman in the cast. I thought, “They’d have to make a serious blunder for me not to enjoy this.”
As I progressed through the initial episodes, where the premise suggests that universal determinism allows for perfect predictability via simulation, I kept warning my husband, “If this becomes a tale of human spirit triumphing over determinism, I will be furious.” And, as we all know, it did.
Let's unpack this together and mend our shattered expectations.
Devs can be read as a time-travel story, which is how time paradoxes crop up throughout the series, culminating in the finale’s major paradox. To briefly clarify how such paradoxes create tension within sci-fi narratives:
The protagonist gains foresight into future events and reacts to them in the present, inadvertently causing those very events to occur. This creates a paradox in which the origin of the event becomes obscured. A recent example is found in Watchmen, featuring Dr. Manhattan, who can traverse time effortlessly. Let’s illustrate this with a plot point from that series:
Warning: spoiler for Watchmen ahead.
In one key moment, Angela's grandfather, Will, essentially time travels by conversing, through Dr. Manhattan, with Angela, who is decades younger than he is. When Angela questions Will about the murder of the police chief, Judd, she unknowingly supplies the motive that leads to the murder. So if the murder was incited by knowledge of a future event, where does the causation truly lie?
Spoiler for Watchmen now over.
Here’s my fourth disclaimer: I have a strong aversion to time paradoxes. They strike me as lazy plot devices that introduce unnecessary confusion. The paradox in Devs, at least, is easier to dissect, because we understand something about how algorithmic prediction works. Traditional time-travel paradoxes can't even be argued about properly, since we have no real knowledge of how time travel would work, which only deepens my disdain.
For now, let’s focus on a coherent line of reasoning:
Devs employs a quantum simulator as its time travel mechanism, allowing characters to view and react to past and future events with perfect accuracy, leading to the introduction of various time paradoxes. As the story unfolds, Katie and Forrest witness Lily’s demise at the simulation's conclusion, just as the predictive power of DEVS falters. Lily ultimately “breaks” the machine by exercising her “free will,” which contradicts its predictions—an outcome that left me yelling at my screen in disbelief.
What went wrong with this narrative twist? To grasp that, we must first dissect the concept of "free will." Every character in Devs who observes the future through the simulation has the same input as Lily: a visual and auditory representation of the predicted future. Forrest and Katie consume these predictions like a soap opera, having memorized them line by line (which raises the paradox: where did those lines originate if they only stem from the simulation?).
In one scene, a group of technicians watches their immediate future selves and imitates their actions flawlessly, fulfilling DEVS’ prophecies. Since the characters exist in a simulation within a simulation, this creates a recursive time loop, leading to a spacetime singularity. If the observers act only because they’ve seen their future, where does the causation for their actions derive? I contend that this portrayal is probabilistically flawed. Humans, upon witnessing predictions of their actions, would likely resist those predictions or introduce variability in their responses. The self-fulfilling prophecy aspect of this time paradox simply doesn’t hold up.
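To make that objection concrete, here's a minimal sketch in Python; the two-option scenario and the response rules are my own illustration, not anything from the show. If the machine's prediction is itself an input to the observer's behavior, a prophecy is only self-fulfilling when it is a fixed point of the observer's response, and a contrarian observer has no fixed point at all:

```python
# A prophecy is self-fulfilling only if it is a fixed point of the
# observer's response function: respond(prediction) == prediction.

def obedient(prediction: str) -> str:
    # The Devs technicians: do exactly what the screen showed.
    return prediction

def contrarian(prediction: str) -> str:
    # A plausible human: do the opposite, just to spite the machine.
    return "stay" if prediction == "leave" else "leave"

def has_consistent_prophecy(respond) -> bool:
    # Is there any prediction that survives being shown to its subject?
    return any(respond(p) == p for p in ("stay", "leave"))

print(has_consistent_prophecy(obedient))    # True: the show's premise
print(has_consistent_prophecy(contrarian))  # False: no valid prediction exists
```

For the obedient technicians the loop closes neatly; for anyone with a contrarian streak, there is simply no prediction the machine could display and still be right.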
Another paradox arises when Katie takes Lyndon to the dam to test his belief in multiverse theory. If Katie acts based on the simulation she’s already seen, what is the origin of her action?
Now let’s examine Lily’s time paradox: what differentiates her from others in her knowledge of the future, enabling her to “break” DEVS? In the finale, both Katie and Forrest present Lily with the simulation’s ending, allowing her to see her predicted actions. When she makes a “choice” based on what she observes, why is she uniquely capable of defying the laws of the universe—laws that Katie programmed into DEVS? Is Lily somehow the catalyst for a universal law, an entity in the material realm that the laws of physics failed to account for?
In a somewhat clumsy resolution, Forrest quips, “talk about a Savior!” implying a connection between Lily and Jesus, suggesting that Lily embodies free will. However, Jesus transcends recursive time loops—only mere mortals fall prey to them. If we accept that Lily represents an unprecedented force in the universe, then true free will has never existed until her emergence, likening her to a divine figure.
The philosophical struggle to reconcile free will with a deterministic machine (the brain) traces back to Descartes, who theorized that mind and brain intersect at the pineal gland, allowing the soul to influence behavior. In Descartes' time the brain's role in behavior was already recognized, but religious belief still prevailed, so an explanation was needed to resolve the mind-brain dilemma, which was really a soul-brain dilemma rooted in that same religious context.
Surprisingly, we haven't progressed much since then. Even those who are non-religious often cling to the notion of free will without realizing the burden is on them to articulate what it is and how this non-material force interacts with the brain. This is a complex question even for scientists, and Garland offers no resolution.
Devs presents a peculiar interpretation of free will. In Garland's framing, observing the simulation appears to strip the viewer of the capacity to respond in a fundamentally human manner.
Let’s ponder how we might react if we witnessed our future actions on screen. It’s odd how this is depicted in the series, as if individuals fall into a trance, dutifully following the prediction like devoted followers of a cult leader. If we were shown our predicted selves, our immediate instinct would likely be to defy that prediction. This isn’t even “free will”; it’s simply human nature to rebel against constraints, especially when aware of the prediction's existence.
Ultimately, behaviors boil down to inputs entering our brains, bouncing off genetic predispositions and life experiences, leading to an output. If you doubt that people instinctively resist being boxed in, just observe the reactions of individuals protesting against mask mandates—though perhaps that’s a poor example. The premise that anyone would respond to seeing a prediction of their behavior by mimicking it is untenable.
Thus, the concept of free will in Devs faces significant issues. It seems to suggest:
- Viewing the simulator strips one of free will, reducing them to mere imitation.
- Lily somehow escapes this fate, becoming the first to defy predictions about her behavior.
- Free will is defined by rebellion against DEVS' predictions, implying it didn't exist until Lily's arrival, akin to the absence of capitalism before Jesus.
This interpretation suggests that the predictions made by the machine represent a unique type of input, distinct from any previous experience in the universe. They are processed by the brain in a manner that contradicts how all other inputs are understood. Per the show's premise, one's reaction to this specific input determines their possession of free will. Does this mean free will is non-existent in the present and could only emerge in the presence of a quantum simulator? And can we only ascertain someone’s free will by presenting them with predictions from that simulator?
By tying the existence of free will to one’s reaction to a machine, the entire concept becomes muddled. What message is Garland attempting to convey?
In summary, the show's treatment of free will is troubled and ambiguous—an expected outcome when grappling with a concept that is fundamentally elusive.
Equally concerning, from a scientific perspective, is Devs’ portrayal of algorithmic prediction. Here’s why:
In popular culture, prediction is often depicted as a black-box process, with the implicit fallacy that complete information guarantees perfect predictive capability. It doesn't.
Firstly, humans lack access to complete information about anything—especially not quantum particles. Our understanding is filtered through our brains, which have evolved to obscure unnecessary details for practical functioning. We only grasp information relevant to our survival-driven experiences. Therefore, our perceptions of natural variables, from molecules to subatomic particles, are significantly limited. We will likely never achieve complete knowledge for a simulator like DEVS.
Secondly, even if we had complete information, the DEVS simulation relies on data up to a specific point in time. To illustrate, consider an arbitrary moment: if we’re only halfway through the universe's lifespan, we only have half the relevant data!
Thus, perfect prediction becomes impossible. Additionally, DEVS' capacity for flawless predictions rests on the assumption that every variable is inherently predictable and follows a discernible pattern. You observe a pattern for a million cycles and reasonably deduce it will continue for another million. Then, at cycle 1,000,001, it changes, invalidating every prior prediction.
Moreover, not all variables are predictable. Each prediction about one variable relies on others. What if the predictors or algorithms are incorrect? Prediction quality hinges on the model—it's not an inherent property of the universe that simply materializes with “complete information.” In theory, possessing comprehensive information could still lead to poor predictions.
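A toy illustration of both points, using made-up numbers rather than anything from the show: two models can agree on every observation gathered so far and still contradict each other at the very next step, so "complete information" about the past does not, on its own, pin down the future.

```python
# Two models that fit a million-step history equally well, then diverge.
N = 1_000_000  # steps of observed history (the signal was 1 throughout)

def model_constant(t: int) -> int:
    # "It has been 1 for a million steps, so it is always 1."
    return 1

def model_regime_switch(t: int) -> int:
    # "It is 1 until step N, then the regime changes."
    return 1 if t <= N else 0

# Spot-check: both models reproduce the observed history exactly...
assert all(model_constant(t) == model_regime_switch(t)
           for t in (1, 1_000, N // 2, N))
# ...and disagree one step past the data.
print(model_constant(N + 1), model_regime_switch(N + 1))  # -> 1 0
```

Nothing in the first million observations tells you which model is right; the data alone cannot arbitrate.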
So, is that the crux of Devs’ ending? Did the simulation fail due to a lack of complete data, or was Lily’s emergence inherently unpredictable? Let’s briefly discuss what a genuine predictive machine might entail.
We formulate predictions at point A in time, but due to incomplete data, the patterns evolve by point B. In the case of DEVS, predictions will differ when viewed at point A versus point B, and this has nothing to do with “free will.” The patterns would adapt to integrate new information, including human reactions to the simulation, as humans are material forces in the universe.
Regarding universal determinism, the question of whether all of time has in some sense already occurred: this doesn't affect my argument. If it has, the sequence would unfold as follows (a toy sketch follows the list):
- Humans create a predictive machine based on incomplete data.
- They react to flawed predictions, prompting updates.
- The machine generates new erroneous predictions based on slightly improved data, and humans respond again.
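Here is a minimal sketch of that loop, with a reaction rule I've invented purely for illustration: the machine extrapolates from last round's data, the human reacts to the prediction, and the prediction perpetually lags reality, no free will required.

```python
# Toy predict-react-update loop: every prediction is stale by exactly
# one human reaction, so machine and reality never converge.
behavior = 0  # the human's last observed action
for step in range(5):
    prediction = behavior      # machine extrapolates from old data
    behavior = prediction + 1  # human reacts to being predicted
    print(f"step {step}: predicted {prediction}, actual {behavior}")
# step 0: predicted 0, actual 1
# step 1: predicted 1, actual 2  ... and so on, forever off by one.
```

The prediction error here is mundane and mechanical, not miraculous.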
Yet, this narrative lacks the infinite time loops and singularities depicted as technicians observe the simulation, and as Katie and Forrest, in true Silicon Valley fashion, fulfill their mechanical prophet's visions. The idea that no one would challenge the simulation or that there wouldn't be variation in replicating actions is silly, as is the concept that DEVS’ predictions are unerringly perfect.
Now, let’s revisit the moment Lily “breaks” DEVS. Up until she uses her free will to avoid shooting Forrest, Katie and Forrest have watched the simulation run to its endpoint: they see Lily crawl across the floor and die, after which the simulation glitches. One would think Lily's special free will would emerge here, right?
But no, it actually manifests earlier when she refrains from shooting Forrest. The elevator crashes not due to her predicted action, but because Stewart, of all people, cuts the power to the magnet holding it. The simulation and reality diverge when Lily chooses not to shoot, but then re-converge with the elevator's crash? Weak execution, Garland. The simulation should have ended when Lily defied the prediction—that should have been the pivotal moment.
Lastly, I must critique the representation of Many Worlds theory. Lyndon swaps the inaccurate deterministic de Broglie-Bohm algorithm for Everett's (also deterministic) Many Worlds Interpretation, and suddenly the simulation's output becomes crystal clear. Yet, as Forrest asks, how do we know what we're seeing is our universe?
One aspect the show accurately highlights is that the deterministic de Broglie-Bohm algorithm struggles with perfect predictions due to numerous variables, many of which remain unobservable to us.
But Everett's Many Worlds Interpretation is equally deterministic: there is no wave-function collapse in MWI; apparent uncertainty in one world is accounted for by different outcomes in other worlds. A crucial point the show glosses over is how the Everett implementation of DEVS selects a branch, because the likelihood that the algorithm would land on the “correct” universe, ours, is exceedingly low.
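Some back-of-the-envelope arithmetic, with deliberately tiny numbers of my own choosing, shows just how low. If a simulated history contains k branching events with b outcomes each, the simulator must single out one path among b^k:

```python
# Branch counting with absurdly conservative numbers.
branching_events = 1_000  # quantum events in the simulated history
outcomes_each = 2         # outcomes per event

branches = outcomes_each ** branching_events
print(f"paths: 2^1000 ~ 10^{len(str(branches)) - 1}")  # paths: 2^1000 ~ 10^301
# The chance of randomly landing on the one branch that matches our
# world is 2**-1000, effectively zero, and a real history contains
# vastly more than a thousand branching events.
```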
Then the show takes an odd turn, veering into “Matrix” territory and abandoning the relevance of the many-worlds plotline entirely. I kept hoping for a connection between free will and the splitting of worlds, but the show never answers the questions Forrest raises about MWI's efficacy. Why does it “work”? How do Forrest and Lily end up in the correct universe within the simulation? By this point, I was left feeling bitter.
I thoroughly enjoyed Devs until the last episode, even willing to overlook the lead’s stiff acting (which I mentioned I wouldn’t address). I appreciated the slow pacing, the atmosphere, and the overall vibe. However, the finale tarnished the experience for me, akin to Lily dismantling DEVS or Jesus undermining safety nets for the less fortunate. It felt as though Garland wrote himself into a corner and then scrambled to deliver a resolution, banking on his audience's confusion to gloss over the plot inconsistencies. Unfortunately, those inconsistencies ultimately soured my enjoyment of the entire series.
Better luck next time, Garland. If you decide to craft another sci-fi narrative, you might want to consider bringing me on as a consultant.
## Chapter 2: The Philosophical Underpinnings of Devs
The first video, titled "Predeterminism in Alex Garland's DEVS," explores the philosophical implications of determinism presented in the series, delving into the nuances of free will versus predestination.
## Chapter 3: Recap and Review of the Finale
The second video, titled "DEVS Series Finale Full Recap & Review | Episode 8," provides a comprehensive summary and analysis of the finale, highlighting key plot points and thematic elements.