initial conditions
Reading Permutation City and Anathem has me thinking about initial conditions and how, if deterministic, they evolve.
Permutation City plays off cellular automata. The Autoverse is a simulated world, with simplified chemistry, that can, through careful initial conditions, spawn a biome (I always think of Minecraft worlds).
Both books make me think I should read and learn more about chemistry and biology. Before, it had seemed pointless (too harsh a word) in the sense that I'd never do any work in those fields that impacted things. Now, mostly under the influence of Anathem (and maybe some of the Waking Up guided meditations?), I think that knowledge could enrich my understanding of the world around me - that that could be a worthy goal in itself.
And maybe advances in AI also mean nearly everything I learn will be learned for my own understanding, rather than with the goal of moving the ball forward. But understanding can be something beautiful and interesting in its own right. It is a loss, maybe - but I think I'm coming closer to seeing it as a refocusing that could be correct in itself. Maybe focusing more on my own understanding and subjective experience of the world around me, including tying that in to more theoretical understanding, was always the right way to approach things. And the idea of coming up with some novel human-computer interaction to contribute was always a distraction.
One thing I appreciate in Anathem is the focus on the value of getting to the truth - not its instrumental value, but the sense that knowing the truth is in itself good, or right, or something. I always think of "stable ground": the feeling when I have a comprehensive and confident understanding of a piece - that I'm on stable ground and can look around and examine for my next move.
And searching for stable ground is a search for the right primitives (kind of? could be more interesting distinctions made here). Are initial conditions primitives? Not really - but connected. In Anathem the primitives of life (physics, chemistry) grow out of the initial conditions that come through the big bang. Those are like seed values of a world. It's fun how both Permutation and Anathem have video game influences in how they think about the world - specifically in how games simulate the world - with ticks.
I still think about how the ambition of Permutation's Autoverse is beyond anything we're trying to do in AI, on a computational basis.
Oh! And that is another common point between them. Permutation has this configuration-of-dust idea, where everything could be superimposed on everything else (because computation can be spread out, I think? I never made it to stable ground on that concept). So Permutation City is an idea that gets run and then cut off - but because its rules are expanding deterministically, it continues to grow. I think it's something like the quantum event stuff in Anathem: that there is a universe where the simulation stops means there's another one, superimposed on it, where it doesn't?
In Anathem (so far) the multiple-worlds idea is based on the notion that worlds can only proceed through plausible events - that the paths are limited by something like primitives or initial conditions. The laws of physics can't be broken. That still leaves an unfathomable amount of branching, but not as much as if implausible events had to be included. And it also means you can roll things back to where they converge in a sensible way. Our histories are sensical. I get the feeling this will be played with by the end of the book. Still, fun.
Related, from thinking about computers (including the Operating Systems: Three Easy Pieces book): how unintuitively efficiency can behave in a computer compared to the metaphors we place on it from real life. This is a fascinating thread to follow - I want to do more thinking here. Computers are of course built in, and follow, the laws of physics as electrical signals, but that gets abstracted away - though traces still remain. The world and rules that govern them are alien, but not alien as in completely separate - alien as grown out of our own world, the one we interact with today and have naive knowledge of, in ways that can match up with our theoretical understanding.
And where does this leave us with AI? Probably for another writing session. But I have been thinking in my programming lately - which has moved to prompting an AI to generate code for me - that I am thinking in primitives - maybe even more in primitives - about what primitives the agent needs to succeed. And what primitives it needs that we could also have a joint view over - maybe the same view, or more likely a similar view - so that we feel like we understand what the agent is up to. And is that so we can reason about further steps to take? Instrumental? Or does it just make us feel better/right? Complicating that is that agents are mostly trained to be a simulation of us - which raises the question, from Permutation, of at what level they're simulating us. The language level, right? Not the cellular level. But then the question is how much of our world is embedded in the way we have written about it - and in how we go about getting things done.
Moving away from the algorithm: I've been enjoying Letterboxd a lot lately, and thinking about how rich the world of available films looks jumping across lists, directors, and especially people's favorites. Compared to the home pages of Netflix or Prime it feels much richer, less flat. Even before, when I'd try to go through something like the classics or 'highly rated' sections of Netflix, it felt like being stuck in a rut.
Similarly I've been enjoying my Spotify custom app where I add albums and then it shows me a random set of six to play from. I think it's a good mix of intentionality and serendipity. I'll often go through an online list or remember an artist I want to add and do that through the admin interface. It's not necessarily that I want to listen to that album right now, maybe it's not the right mood, but I like being able to add it to a list where I know I'll see it again some day in the future.
Thinking about time and compute. I liked this: https://petafloptimism.com/2026/03/14/gas-town-and-bullet-hell/, especially the bullet hell stuff - that feeling of soft focus absorption... I can imagine a world where that is managing agents, though it is pretty far away so far.
More echoes of time: reading Permutation City where copies uploaded to virtual reality have their ticks run depending on the global marketplace for compute. So you get time shear between them and the outside world. Reminded of the deliberate time shear of the Maths in Anathem.
Time diverging - but of course it already long has - every time an improvement in efficiency is made. Sometimes I think of course intellectual labor will follow the same path of automation as physical, why did we ever think it wouldn't?
Site-specific - in the later stage of his career Robert Irwin switched to exclusively doing site-specific installations. He would go study a space and come up with an intervention designed to make you feel the space more. To make you aware of it. There was a lot of variety to what he used - sometimes landscape adjustments, sometimes scrim fabric. It focused on material and placement.
I'm thinking about this (of course) in relationship to AI, and what AI can't do, or what it doesn't make sense to have it do. This is tied to my other post on phenomenology - and also to Sutton's critique of current AI approaches - that they're bootstrapped by written knowledge rather than built through experience. And also by having a toddler and watching her learn.
What AI does not do is sit in a room and think about how the room makes it feel. It does not exist in a space or a body.
I've also been doing the Waking Up trial meditations, where the past two sessions have focused on letting go of your concept of your head, or of the world, and seeing what is in front of you - which reminded me that the Irwin book is titled Seeing Is Forgetting the Name of the Thing One Sees.
Maybe just cultivating the experience of being in the world and noticing feelings and noticing textures and being embodied is a good goal. That's what I see Irwin doing in the site-specific studies - and then there's the intervention. Is the intervention even really necessary (is it assertion of ego)? I'm not sure.
Intervention could be a way to have a conversation. You study a space, you intervene, another person comes into the space and experiences it and your intervention, they intervene themselves. That has a nice feel of being a conversation between the world and people. Textured, grounded.
That connects to some thoughts I've had about my relation to games. Sparked by Player of Games and Glass Bead Game. I think I have a game player's personality but I don't really play games. But part of my participation in web design and engineering has been interest in joining in a game - of starting from a set of materials and seeing how others use them, then making my own uses and seeing if those reverberate with others, or tracing my own influences.
Where now I feel like the pace of AI has disrupted that game, so there's less time to focus on a specific site or app design, because people have less bandwidth to regard it. But there's nothing stopping me from focusing and spending that time if I want... but it doesn't feel the same. Because the game and materials around it have changed. If I go up a meta-level into the design of the systems that produce the designs that helps. Though how many levels can you ultimately go up before all of the grit of specificity is lost?
Containers for thought - I've been doing experiments with book logging and album-focused music players. Partly because coding agents make dealing with Spotify integration or a book database easier for me. Partly, I think, because in a time when everything is changing it's nice to spend some time engaging with works of art that reward sustained attention and stay constant.
Of course circumstances change your relationship to a work of art, but at least the work of art remains a stable point for you to gauge those changes. There's something even more to it being a physical object. Knowing those pages contain that information.
I think having stable containers for thought will become even more important as things continue to change - information becomes even more abundant. They're touchstones to organize conversations around. Of course the downside of having thoughts inducted into being special objects is the gatekeeping - who decides on the special status. And that downside still seems real.
I wonder if it's better in some sense to spend time engaged with one specific album or book than it is to be exposed to a bunch of similarly interesting ideas on a more surface level. Saying that doesn't even seem controversial, actually. But what do you do knowing that? Do you make yourself power through a book you're not enjoying that much rather than switch to a new one? I guess it's just an 'optionality' thing. Probably no good way to really optimize it - better to pursue where your curiosity takes you. But good to think about the overall balance of social feeds to books you're taking in.
Thinking about phenomenology - studying your own subjective experience of the world - and how it might relate to a world of super-capable AI. Meditation can be a way of doing it.
One sign it may be interesting is that it's something it doesn't really make sense to have AI do. They could synthesize from human accounts - but it feels clear to me that's missing the point. It's not about understanding your own subjectivity to achieve something in the world, it's more of a 'for the sake of it' thing. Making art is also often a 'for the sake of it' thing, but it also produces something, and so those goals can get confused, since AI can produce art-shaped things too. But nothing is produced with phenomenology - arguably writing - but even the writing is, I think, acknowledged as a hopeful guide or account; its real success can only be judged internally, within the person.
I remember the philosophy from college, but the last time I saw it really brought up was the Robert Irwin book - where he takes time out to really study the literature. His art being site-specific seems connected, and also another thing that seems orthogonal (or close to it) to what AI does. Again focused not on condensing something down into a success criterion that can be achieved, but on taking the time to observe a specific site and then make a specific intervention directed at other people's subjective experiences.
I'm collecting things that feel related to me - ritual, coziness. Partly the thought experiment is if AI becomes super capable of doing a bunch of work, what lies outside of work. Human relationships of course too.
I read "Turning the database inside out" by Martin Kleppmann. I've also been thinking about better ways of providing concise context for LLMs. Both have to do with stream processing. Especially struck by Kleppmann's description of state as derived data built from an append-only log of immutable facts. Which I think is also how people are starting to do memory for agents.
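The core of that idea fits in a few lines - a minimal sketch, where the event shapes ("add_album", "rate_album") are ones I'm making up for illustration, not from the talk:

```python
# Sketch of "state as derived data": the log is append-only and immutable,
# and the current-state view is just a pure fold over it. Re-deriving from
# the log always gives you back the same state.

log = [
    {"type": "add_album", "id": "kind-of-blue"},
    {"type": "rate_album", "id": "kind-of-blue", "rating": 5},
    {"type": "add_album", "id": "in-rainbows"},
    {"type": "rate_album", "id": "kind-of-blue", "rating": 4},  # later fact wins
]

def derive_state(events):
    """Fold the immutable log into a current-state view."""
    state = {}
    for e in events:
        if e["type"] == "add_album":
            state[e["id"]] = {"rating": None}
        elif e["type"] == "rate_album":
            state[e["id"]]["rating"] = e["rating"]
    return state

state = derive_state(log)
```

Nothing is ever overwritten in the log itself; you can replay a prefix of it to see the state at any earlier point, which is part of what makes it appealing as agent memory.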
For the agents to use that memory, they do an embedding (+ keyword?) search over the immutable facts and put the results in context. Similarly, you could have an LLM step that takes a larger set of the relevant facts, summarizes them, and then feeds that summary as context to the LLM. Some questions about where to put the cutoff: top 20 would work most of the time, but maybe not always?
Fun to spot the similarities, anyway. I kind of want to experiment with a really transparent memory mechanism - maybe a constraint systems example where anybody can contribute a memory - where a memory is almost like a twitter post and then you can talk to an LLM that checks memory first... also showing which memories were used.
Relates to the fact that a log of immutable facts also has a lot in common with something like a person's social media feed. You might want to do the same sort of stream processing to something like a twitter feed.
The questions are again about which level to run it at. Do you do it on demand? Do you generate a summary and update it at regular intervals? How do you let the user modify it without it being re-overwritten on updates?
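The retrieval-and-cutoff step above can be sketched on-demand like this - keyword overlap stands in for a real embedding search so the example stays self-contained, and the facts are invented:

```python
# Hedged sketch of retrieving memories for an LLM: score each immutable
# fact against the query, keep the top-k, and hand those to the model as
# context. A real system would score with embeddings (and maybe keywords);
# word overlap is just a stand-in here.

def score(query: str, fact: str) -> int:
    """Count shared words between query and fact (toy relevance score)."""
    return len(set(query.lower().split()) & set(fact.lower().split()))

def build_context(query: str, facts: list[str], k: int = 20) -> list[str]:
    """Rank facts by relevance and apply the cutoff - the open question
    being where to set k."""
    ranked = sorted(facts, key=lambda f: score(query, f), reverse=True)
    return ranked[:k]

facts = [
    "reading Anathem again this month",
    "listened to In Rainbows on repeat",
    "reading Operating Systems: Three Easy Pieces",
]
context = build_context("what am i currently reading", facts, k=2)
```

Showing the user which facts made the cutoff (the transparent-memory idea) is then just a matter of surfacing `context` alongside the model's answer.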
Working with embeddings - particularly with the goal of aiding thinking - a lot of the challenge is where do you draw the line for chunks. Paragraph-by-paragraph? Essay-length? Book-length?
Makes me appreciate books as containers for thoughts. Part of their purpose is that you can use them as a referenceable point for 'this set of ideas'. Maybe you'd like to decompose them into their component ideas - but that introduces judgement on the model's part - embedding the whole book (or a generated summary of the book) is more stable.
With stable points in space you can rearrange - maybe interpolate. Although there again the clean picture in my head is in 2D space, while embeddings are 768 dimensions or more. Feels like you want to flatten to do some investigation, then reinflate, then flatten to investigate another angle... But you probably lose some stable ground then.
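The flatten/reinflate move could look like a PCA round trip - random vectors stand in for real embeddings here, and 768 just matches a common embedding size:

```python
import numpy as np

# Flatten high-dimensional embeddings to 2D for inspection, then map back.
# The reconstruction is lossy: everything outside the top two principal
# components is gone, which is exactly the "lose some stable ground" worry.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 768))  # 100 fake document embeddings

mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

flat = (X - mean) @ Vt[:2].T     # the 2D view you can actually look at
approx = flat @ Vt[:2] + mean    # "reinflate": best rank-2 reconstruction
```

Picking a different pair of components (or a different projection entirely) gives you the "investigate another angle" move, at the cost of the two views not sharing the same flattened coordinates.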
I watched the Welch Labs video about the 'bitter lesson' and I think it shifted how I think about LLMs a bit.
The perspective of the bitter lesson is that we've taken a shortcut in bootstrapping models with all of our written data, rather than having them learn from pure experience of the world.
No individual human learns out of pure experience. But if you take humanity collectively, then all of our writing has emerged from that experience. So if you look at it on that timeline of development, LLMs get to jump in after the hard work of deriving all of that knowledge has already been done.
It makes me feel prouder of humanity, I think? And more like LLMs are an outgrowth of work already done, rather than a new alien force.
This of course does not change or necessarily make more fair the effects LLMs have on society, that's an issue of policy and decisions we make now. Though I think it is a perspective that argues the benefits should be widely distributed since it is built upon a base we all put together collectively.
I've been thinking about monks and tech and attention. If one of the challenges of life today is managing attention in a world of content designed to take it (ref Chris Hayes's The Sirens' Call), then monks stand out as an example of a group that willingly placed constraints on their lives in order to manage attention.
(My idea of monks here is more from books/movies/TV than first-hand experience or even actual accounts - though quite a while ago I was really interested in actual monks and nuns and I want to revisit Thomas Merton writing soon.)
I'm especially thinking about monks in sci-fi - I recently read and enjoyed the Monk and Robot series of books, where a 'tea monk' operates in a society of abundance. Now I'm rereading Anathem which is overflowing with ideas about attention and modern life and what chosen limitations can offer. I also just listened to Player of Games which doesn't really have monks but the 'what do we do in a society of abundance' part feels connected in my head.
Of course I'm thinking about all of this in a context where AI might be able to both do a great deal of work for us (creating abundance) and also create dangerously addictive content that could easily take up all of our attention.
Dune also connects to some elements of this - the idea that they were at a certain point more technologically advanced and then pulled back.
For me this is also all connected to ideas of ritual - ritual as a method for reminding us of what's important - or helping us reach a state of mind where we can appreciate what is important.
Continued interest in meditation as one of our most powerful techniques for directing attention and perspective.
In some sense I think about this stuff because I think about what I want to make. Existing in the digital firehose leaves me feeling scattered. Watching my toddler experiment with cause and effect in the physical world feels so much more stable, rooted in the world. Also cozier, human-scale, with constraints naturally flowing from the actual physical world.
So maybe bringing computation into physical interfaces is helpful. I recently made a huge sort of piano/appliance-like container for my computer, and it's strange how it makes the computer feel more like a large machine - more like a place. Again, it helps with grounding.
Or maybe I should focus on tech that gets you away from the screen and computer entirely, which could actually be a wide range of things (AI agent assistants fit in this category). But there are still parts of the computer experience I love, and I feel like it's a push to better identify what those things are - not to be too certain about what exact form they take, but to hold onto the feelings I want them to leave you with: grounded, in the world, connected.
I read the Tim Berners-Lee book as well, which must be swimming somewhere in my head in all of this - the high-level protocols of the web (basically hyperlinks and URLs as pointers) had to be invented, could have gone another way, and probably still haven't fully fulfilled their promise.