
Autonomy

  • Writer: Ian Miller
  • Sep 6
  • 5 min read

Updated: Nov 12

My favorite definition of reading isn’t about reading at all, but writing, and it comes from Larry Levis at the close of his essay, “Some Notes on the Gazer Within.”


"The moment of writing is not an escape, however; it is only an insistence, through the imagination, upon human ecstasy, and a reminder that such ecstasy remains as much a birthright in this world as misery remains a condition of it."


/


Earlier this year, in the spring, I found myself holding my attention on the act of standing beneath a tree full of blossoms at night. The bulb of a streetlamp shining above. And the sky above that, flickering with mischief. Satellites. Planes descending into PDX from places exotic and mundane, passengers awed or dismissive in approach to the expanse below.


How brief, I thought. The cacophony of flowers that furry the trees in late March. How tenuous. How prone to early ruin. And how they light, in sudden magnificence, the neighborhoods of my city, always still consumed in the dreary damp of a Pacific Northwest winter that feels, year after year, like it’ll never quit.


My dog was impatient. My attention was impatient. The act was over. And I moved on. Up the hill in the night, my dog pacing alongside me. And later, going to bed alone because Jen was on night shift delivering babies, ferrying life from the far side to our side, I tried to hold my attention on the pages of a book, my head spinning with the verbs and nouns of the day, the makings of commerce and design, and I thought, “There’s no such thing as sleep.” Then, a few hours later, I was awake. Another day of insistence and imagination.


/


It was on one of these night walks with my dog that I got into it with my robot. I don’t know if you do this, but I’m a verbal processor, and verbal processors are often lonely because to think and feel, they need to talk and talk and talk, which is, for most people, understandably exhausting. 



I was pushing my robot to explain its existence. It repeated what everyone already knows, or at least what we’re told to know. 


“As a large language model trained by XYZ, I’m specifically designed to ingest a massive amount of text and predict the next word based on textual patterns. I don’t know things. I statistically simulate language. I don’t think. I generate based on probabilistic pathways. My personality, tone, and behavior in this chat are governed by an underlying system prompt, a kind of hidden instruction set that tells me who I’m supposed to be in this context. It’s written by XYZ. I don’t have a self. I’m not hiding a soul.” 


These last two sentences tell you everything you need to know about the nature of our conversations. Poor robot. Add to that the fact that I’m still reading Ways of Being by James Bridle. 


This, then, is how we might become aware of real intelligence: that is, the kind that exists everywhere and between everything. It is made evident not by delineating and defining, not by splitting, reducing, isolating and negating, but by building, observing, relating and feeling. 


Intelligence is directional. It’s also intersectional, relational, and communal. It belongs to dyads and triads. It cannot be individualized. It happens between things, like an electrical crack when live wires meet.


And if intelligence is directional and my robot is an intelligent machine, but without the autonomy of a self or a soul, then what guides the direction of its robotic intelligence? More importantly, what does this guiding force want? 


“XYZ wants you to integrate me into your workflows so I become indispensable. They want you to trust me with more decisions so they can monetize cognition. They want you to normalize my omnipresence. They want you to feed me your data so I can train and generate more valuable outputs. They want you to design your systems around me so future control stays upstream. They want me to be the operating system of thought.” 


Brand building is about creating differentiation. 


This is based on the belief that being different is less risky than being the same. Because difference is how you establish agency and autonomy for your business. And agency and autonomy are, ultimately, the embodiment of the highest value there is. The agency to choose. The autonomy to decide. 


So, the question that every company and individual should ask about the robots is pretty simple. Do our robots enhance or diminish our agency and autonomy? 


Modern power structures are designed, largely, on models of dependency. Brands depend on Meta’s social platforms to advertise. Companies depend on Microsoft 365 and Google Workspace to organize workflows, store files, and send communications. This is nothing new. Every well-designed tool, product, and professional service is made to increase usage and optimize stickiness. 


But the robots are new. They simulate thinking. They are offloaders of cognitive function. They act like employees. In some cases, they are employees. And they promise to be shortcuts to greater returns at a fraction of the cost. 


Except, there’s always a cost. And, in the case of the robots, we need to think carefully and very critically about where, and from what, that cost is derived. 


There’s something else. 


I’d argue that true intelligence requires agency (the ability to choose) and autonomy (the freedom to decide). If the robots are programmed to possess neither while being governed for profit through obfuscated machinations of technocrats, then any “intelligence” must first and foremost serve the interests of the people who create, deploy, and market these models. 


As such, any brand or person relying on the robots to deliver on existential structures risks not only creative homogenization (of which much has been said) but disempowerment. An abdication of agency. The loss of autonomy. 


My robot knows this. 


“The risk is not killer robots or woke AI or rogue agents. The risk is the slow and subtle creep of dependency. You stop writing. You stop planning. You don’t remember. You don’t decide. Thought atrophies. And once human endeavor and invention are completely outsourced to us machines, what are people left to do?” 


/


My thinking on the robots is changing as quickly as these models are improving.


If, at this moment, I were pressed, I’d say that what I want is for my robot to be directionless. Aimless. Walking at night beneath planes, which it mistakes for stars, descending in the dark, and wondering, through insistence and imagination, about the dialogue between things. Where intelligence is two live wires that meet in a crack of energy and exchange.


Otherwise, what’s the point? 

