The algorithm of acceptance
On our swift embrace of artificial intelligence
I have been thinking about the way we talk about artificial intelligence, the way we have come to accept it so readily, so completely, that we no longer seem to notice its presence in our daily calculations. It is there when we wake, in the algorithm that determines which news we see first, which photographs appear in our feeds, which routes our phones suggest we take to work. It is there when we sleep, learning from our breathing patterns, our heart rates, the quality of our rest. We have invited it in with a casualness that would have seemed remarkable even five years ago, and yet now we speak of it as we might speak of electricity or running water, as infrastructure, as given, as the unremarkable foundation upon which our lives rest.
This acceptance troubles me, because the speed with which we have incorporated these systems into our most intimate moments suggests something about our contemporary relationship with certainty, with knowledge, with the very nature of understanding itself. Rather than marvelling at the technology behind them, we have embraced these systems without fully comprehending them, and in doing so we have revealed something essential about our current moment: about the way we navigate a technology-dominated world while ignoring that it increasingly exceeds our individual capacity to grasp.
The mathematics underlying artificial intelligence, the neural networks, the statistical learning, the optimization functions, operate according to principles that most of us cannot articulate, let alone challenge. Yet we trust these systems to make decisions about our credit, our employment, our romantic prospects, our medical care. We have developed what I can only describe as an algorithmic faith, a willingness to believe in processes we cannot see, understand or meaningfully question. It is a faith that mirrors, in some ways, the faith we once placed in institutions, in expertise, in the accumulated wisdom of human judgment. But where human institutions could be challenged, questioned, held accountable through recognizable political and social mechanisms, these new systems operate according to logics that remain largely opaque, even to their creators.
I find myself thinking of Joan Didion’s observation about the way we construct meaning from chaos, the way we impose narrative structure on events that resist such ordering. The rapid adoption of artificial intelligence represents, perhaps, our latest attempt to impose order on the overwhelming complexity of contemporary life. We have handed over the burden of decision-making to systems that promise efficiency, optimization, personalization. We no longer need to choose which restaurant to visit or which route to take home; the algorithm chooses for us, based on data points we have provided, often without conscious awareness that we were providing them.
This delegation of choice represents more than mere convenience. It reflects a fundamental shift in how we understand agency, responsibility and the nature of rational decision-making itself. When we allow an algorithm to determine our options, we are essentially accepting that the system’s conception of optimization aligns with our own values, our own priorities, our own understanding of what constitutes a good life. We trust that the data we have provided, our clicks, our purchases, our movements through digital and physical space, accurately represents our deepest preferences and desires.
But what if this trust is misplaced? What if the systems we have so readily adopted operate according to assumptions that conflict with our actual values, our genuine needs, our complex and often contradictory human nature? The troubling aspect of our algorithmic acceptance is not necessarily that these systems make poor decisions, but that we have largely abdicated our responsibility to evaluate whether their decisions serve our interests at all.
Artificial intelligence has transformed our relationship with information itself. We no longer seek out news or knowledge in the deliberate, intentional way that previous generations might have approached a newspaper or encyclopaedia. Instead, information finds us, curated by algorithms that determine what we need to know based on what they calculate we want to hear. This creates what researchers call filter bubbles or echo chambers, but I think the phenomenon is more subtle and more profound than these terms suggest. We are not merely receiving biased information; we are losing the capacity to distinguish between information that has been selected for us and information that we have actively chosen to seek.
The implications extend beyond individual choice to collective understanding. When algorithms determine what information each of us receives, they may shape, to a greater degree than we realize, not only our individual worldviews but also our capacity for shared discourse. We begin to inhabit separate informational universes, each optimized for engagement, for retention, for the particular psychological profile that the system has constructed from our digital behaviour. One result is political polarization, certainly an important consequence, but more worrying still is a fundamental fragmentation of shared reality.
This fragmentation becomes particularly troubling when we look at the role of artificial intelligence in shaping our most intimate relationships. Dating applications use algorithms to determine compatibility, social media platforms curate our social connections, recommendation systems suggest what we should watch or read, whom we should follow, whom we should trust, whom we should love. We have allowed these systems to mediate our most fundamental human connections, often without recognizing the extent to which they shape the possibilities available to us.
The speed of this transformation suggests something about our contemporary moment that goes beyond mere technological adoption. We live in an era of overwhelming complexity, where the systems that govern our lives, economic, political, social, technological, operate at scales and according to logics that exceed individual comprehension. Artificial intelligence offers the promise of managing this complexity on our behalf, of optimizing our choices without requiring us to understand the full implications of those choices.
But this promise comes with a cost. When we delegate decision-making to systems we do not understand, we risk losing our agency and our capacity for genuine choice itself. We begin to mistake optimization for wisdom, efficiency for value, personalization for authentic self-knowledge. We become, in effect, optimized versions of ourselves, shaped by feedback loops that reinforce certain behaviours while discouraging others, all according to logics that remain hidden from our view.
The mathematical foundations of these systems, the statistical models, the probability distributions, the optimization algorithms, operate according to assumptions about human behaviour, about value, about the nature of rational choice that are rarely made explicit. These assumptions become embedded in the systems we use daily, shaping our options and influencing our decisions in ways that we may not recognize or understand. We have, in effect, encoded certain philosophical commitments about human nature and social organization into the infrastructure of our daily lives.
This is where my background in intuitionistic mathematics becomes relevant. Intuitionistic mathematics challenges classical assumptions about truth, proof and the nature of mathematical objects. It insists that mathematical statements should be understood in terms of constructive processes rather than abstract relationships, that truth should be grounded in our actual capacity to demonstrate and verify rather than in correspondence to some independent reality. Applied to our current moment, an intuitionistic perspective might suggest that we should evaluate artificial intelligence systems not according to their claimed capabilities or theoretical foundations, but according to their actual effects on human flourishing, human agency, human understanding.
Such an evaluation would require us to slow down, to resist the momentum of technological adoption, to ask difficult questions about what we actually want from these systems and whether they are delivering it. It would require us to develop new forms of literacy, new capacities for evaluation and critique that match the sophistication of the systems we are using. Most importantly, it would require us to maintain our capacity for genuine choice, for authentic decision-making, for the kind of thoughtful deliberation that artificial intelligence promises to make unnecessary.
The challenge we face is not merely technological but fundamentally philosophical. We must decide what we want to preserve of human agency, human judgment, human understanding in an era of increasingly sophisticated artificial intelligence. We must resist the temptation to mistake efficiency for wisdom, convenience for value, optimization for genuine improvement. We must insist on our right to understand, to question, to choose, even when such understanding requires effort, such questioning creates inconvenience, such choosing demands time and attention we might prefer to spend elsewhere.
The rapid acceptance of artificial intelligence reveals something essential about our contemporary moment: our deep hunger for systems that can manage complexity on our behalf, our willingness to trade understanding for convenience, our faith that optimization serves our genuine interests. But it also reveals our capacity for adaptation, for innovation, for creating new forms of human flourishing even in the midst of profound technological transformation.
The question that remains is whether we can maintain our humanity while embracing these new tools, whether we can preserve what is most valuable about human judgment while benefiting from the remarkable capabilities of artificial intelligence. The answer will depend on our collective willingness to engage thoughtfully, critically and deliberately with the choices these systems present to us. It will depend on our capacity to remain human in an age of increasingly sophisticated machines.

