Problems of Self 



Our work, exemplified by the projects on this website, involves problems of artificial intelligence, effective interaction design, and protocols for coordination and collaboration. All of these are interesting and important, and all are secondary to us. What matters to us is that, across many dimensions of our personal and professional lives, our relationship with technology is a source of Problems of Self: anxiety, depression, hopelessness, loneliness and addiction. As we iterate endlessly on the tangible problems above, it feels like software and its creators spend very little time thinking about these Problems of Self.

Our principal motivation is a simple, selfish one: we hate technology's impact on our well-being. Hate our fractured attention and memory. Hate the feeling that our ability to relax and work seems inexorably bound up with our screens. Hate the way that most of our work and much of our lives is spent in pursuit of such poor facsimiles of connection and contentment. We despair that, as we have become masters at solving problems of technology, we have fostered crippling problems of self.

Whatever we build, we want it to be based on a philosophy of self — a conviction that the well-being of the user is as important as their efficient engagement with the system. Now, more than ever, such a philosophy is crucial, because the problems of technology that we face have become existential threats: they strike at the basic human questions of who we are and what we are here for.

Fueling economic growth, combating social instability, managing artificial intelligence — all require efficient, scalable solutions at a speed never before attempted. We worry that, in the face of all of this chaos, we will wring every drop of efficiency and scalability out of the tools and frameworks that we have built and ignore the fact that the solutions that we need are, at their heart, fundamentally human ones: systems built through introspection, coordination and creativity. We are not systematically fostering introspection, encouraging connection, stimulating creativity. Worse, we are sabotaging our ability to do so in the future by eroding our sense of purpose with tedious, energy-sapping workflows and systems. Effective systems for self-reflection and connection require devoted conviction from both their creators and their users, and it is much, much harder for depressed, isolated and content-addicted people to challenge the frameworks imposed on their lives.

Our goal with this document is to outline the beginnings of one such framework for software: a set of principles for building technology that is centered around these problems of self. We start by delving into the philosophies and technologies of modern software that contribute to our problems of self. We then outline a set of inspirations and principles that might form the foundation of a response to these issues, and finally present a set of design primitives and workflows that reflect our nascent principles. Beyond all else, our goal is to push developers, creators, funders, regulators and users to reflect on and challenge systems in our lives that sacrifice our well-being for the sake of goals that they have deemed more important.


Philosophies of Modern Software: Efficient, Unopinionated & Automated




Efficient Software


Efficiency is the core philosophy of modern software, and so it’s a good place to start when understanding its impact on our lives. This is true for every level of abstraction: our infrastructure coordinates billions of requests a second; our design language has been codified to the point where it’s instantly comprehensible to most of the world; our workflows have been streamlined to maximize accessibility and speed of completion.

Much of this is completely essential — sluggish, confusing applications accomplish nothing. The issue is that this focus on efficiency has extended beyond our architecture into our interaction paradigms. We like to think of software as an abstraction layer over some goal in the real world — chatting to others, organizing information, performing analytical reasoning, relaxing. A philosophy of efficiency, distorted and misapplied, pushes us to achieve these goals as quickly and effectively as possible. The impact of this can be seen everywhere: communication is effortless, knowledge structured and accessible, consumption perfectly tailored to individual preference. In isolation, most workflows — professional and personal — have become easier to perform.

Unfortunately, humans are not efficient creatures. We do not laugh efficiently, cry efficiently, share a moment efficiently. We rarely grow in a way that feels efficient. We live and work best in an environment that is fun and challenging and full of emotion. None of this is efficient, not when viewed from the perspective of a single workflow.

It is the dissonance between our inefficient natures and our efficient software that underlies many of the worst problems stemming from technology. Our loss of connection is, at least in part, due to the increased efficiency of our communication — comments replacing conversations, one-tap reactions replacing genuine responses. Our fractured attention results from thousands of pieces of software optimizing for speed of interaction and engagement in each moment, creating a squawking mess of stimulation and distraction. Much of our dissatisfaction with work stems from spending each day working with software that supports us only as much as is required to maximize our output and which encourages us to treat people the same way. We feel this as depression, as isolation, as hatred. We stare at a system that ruthlessly selects for efficiency with no thought as to what manner of efficiency actually matters, and it stares back at us, and we hate ourselves. And then we stop staring, because self-hatred is an awful, shriveled feeling, and we turn to some other system and ask it to stop us from feeling this way, because, for all of its flaws, we cannot deny how efficiently it numbs our pain.

This matters now more than ever because more and more of our lives are presented through the abstraction layer of software. When this layer, which was never designed to replace the workflows that it’s abstracting, becomes our primary mechanism for interaction at home, at work, and in our liminal spaces, its values become our own.

The long-term solution to the problems caused by efficiency cannot revolve around artificially creating an inefficient world. Turning off your phone, bricking your laptop — these might work for individuals for short periods, but the level of disengagement required for physiological and social recovery is the prerogative of the rich, and more to the point, cannot easily be done in isolation. People are the anchor for better modes of thinking, and like it or not, our people are very often on screens.

In summary, software must adopt a better north-star than efficiency. We need applications to balance efficient output with the well-being of the user, both because it is essential for fostering a more positive relationship with technology, and because people work and relax better when those flows aren't actively destructive to their health. Part of that involves spending less time with technology, but a truly effective response must center around a reimagining of our time with our screens.



Responding to Market Dynamics of Efficiency


Even without malicious intent, efficiency and scalability cyclically influence each other — in order to be scalable, companies must become more efficient; and scale tends to force companies to think and act more efficiently about their users, building abstract models of people (homo economicus; homo consumerus; homo technologicus) to optimize software at scale.

This means that, in a system oriented towards growth, efficiency is not merely the product of norms and design philosophies, but of market incentives. Optimizing for moment-to-moment efficiency drives profitability and growth, creating market dynamics that make it challenging to balance efficiency against the need for humane interfaces and systems.

At worst, there are those companies who present themselves as the most efficient solutions to the problems that they themselves create. Bingeing content leaves us exhausted, which in turn draws us to YouTube, which has been optimized until it has become the most efficient way of combating exhaustion (in the short term). The need for validation draws us into more social behavior, which has been made enticingly efficient by Instagram and Facebook. Difficulties at work lead us to spend more and more time at home interacting with software abstractions of our colleagues, which makes navigating personal relationships feel more intense and less worthwhile. These companies aren't even necessarily evil. In the absence of values-based software, they're simply solving one of the many problems that we face, which just so happens to sit so low in our cognition that a solution manifests as addiction rather than self-improvement.

A philosophy of self has several responses to this. First, moment-to-moment efficiency is a flawed north-star even from the perspective of maximal efficiency, because soulless, highly-efficient applications cripple longer-term performance. To put it simply, bored, uninspired people do not work quickly and effectively over time, even when their tools support it.

Second, consumers are not the rational automata that we sometimes model them as. A more humane interface (a more fun/joyful/meditative/beautiful/resonant/moving/captivating/… interface) is a powerful differentiator in a market centered around nested panes of white glass.

Third, efficient workflows are not as generalizable as we think. People think and work differently, which means there is no single flow that works for an entire userbase — efficient work looks different for different people. Another approach to generalizable software centers around grounding it in fundamental aspects of human nature — feelings of curiosity, challenge, growth — and providing the flexibility to engage with tasks in many ways based on those experiences. We can leverage the flexibility of expressive interfaces and the personalization and orchestration capabilities of intelligent systems to support this at scale.



Software as Sentient System


One of the design principles that emerges from this preoccupation with efficiency is that applications are tools designed to support predetermined workflows. The implication here is that it’s not their role to help you reflect on why you’re performing a task and whether it’s worth doing in the first place. A tool does not force the hand, yes, but it also does not challenge, does not foster introspection, does not consider the broader wellbeing of its user.

A drill, for example, might include an instruction manual on how to use it correctly, and it might have affordances (rubber grips, tabs, markings, and so on) that help you understand how to use it, but it will never tell you to reflect on whether it’s worth using in a particular circumstance. Similarly, Slack has a number of helpful tours that illustrate its various features, and it uses intelligent notifications and visual partitioning to help you figure out what to pay attention to, but there is no expectation that it manages your time or attention in any way to improve your wellbeing.

This is damaging for two reasons.

First, without a philosophy of engagement, capitalist incentives take hold, and their answers to the questions of when and how much you should use an app are “as often as possible” and “as much as possible”, respectively. Replicated across every piece of software that we use across our perennially-online lives, this approach fuels the epidemic of fractured attention and constant stress affecting many of us.

Second, there's actually no such thing as not having a philosophy of engagement. Every algorithmic, aesthetic and systemic decision that you make as a creator informs that philosophy, intentionally or not. There are only philosophies of engagement that are interpretable by the user, and those that are not. The perception that applications are just tools without established perspectives opens the door to anti-user (read: malicious) actors, who can freely shape your perspectives without your being aware of their influence, because the user expectation is that software does not control opinion. This is precisely how Instagram, Facebook, TikTok and other forms of social media shape people's beliefs: their users go in (sometimes subconsciously) assuming that the platforms are merely tools for them to pursue a workflow — relaxing, distracting themselves, learning, making themselves laugh — making it effortless for companies to dictate how users interact with their platforms ("Watch divisive content.", "Watch surface-level content.", and so on).

In short, we need to re-establish the relationship between human and software as being and co-being, rather than as user and tool, and in doing so develop explicit philosophies of engagement for the applications that we build that are based around supporting general wellbeing as well as task efficiency.

As we discuss below, many of the solutions to these problems center around two interventions: developing algorithmic transparency and control, and building in systems that sometimes explicitly constrain the user.


Automation & AI in Software


In domains focused on coordination, logistics and knowledge generation, a drive towards automation is one of the obvious corollaries of a focus on efficiency: if the goal is to complete a task as quickly and effectively as possible, then having an intelligent system complete it instead would obviously be preferable. As our systems have become increasingly intelligent with the development of better artificial intelligence, this has become more and more of a reality, and it feels like every startup in the world is focused on leveraging intelligent systems to automate as much of our lives as possible.

This increase in efficiency is probably a good thing, but we are also deeply worried that it will merely exacerbate the problems of self that we face. Will building thicker abstraction layers over communication, collaboration and creativity really help us?

As we automate, we must also cultivate new sources of meaning in people’s lives. AI can be a powerful tool for that, but we have to think about it very differently. As we introduce in the sections below, one of the most powerful tools that artificial intelligence enables is the ability to dynamically inject play, and fun, and surprise, and narrative, and beauty, and meaning into workflows that would otherwise be boring and unfulfilling.

In short, we have to ask: after we've automated much of our life away, what do we replace it with? The attractor states right now — Instagram, YouTube, TikTok — cannot be the answer.
Nov-2023
practice that sustains is practice that sustains the self