the stack and the heap but the heap is an infinite canvas (is this something)?
the reverse chronological list and the infinite canvas.
One reason to make things is to develop a deeper relationship with the things you put into them.
When songs I put on mix CDs - particularly ones I thought hard about sequencing or mixing into another song - come on, I'm transported back to when I was making them.
I just got an email from my college that showed the college logo, and it brought me back to trying to find a place for that logo on the college band CDs we made.
And it wasn't something inherent to those songs or logos that fixed those memories for me - it was my trying to use them, getting a more in-depth feel for their shape and edges as I tried to fit them into something else.
How do you handle image resolution and file size in a spatial canvas?
In a file browser, image files all feel about the same, even though they can vary widely in resolution and size. When you drop them into a spatial canvas web app, that app has to decide how to size them. The most 'honest' way would probably be to render them at the resolution of the files - but I think this would often be surprising to people - they won't understand why some images are twice the size of others.
So you want to normalize: probably pick a max size and scale big ones down to it. Then you have to think about how to manage that for performance. Do you use canvas to resize it down 'for real'? That means if the image was bigger, you lose that extra resolution. What if the user later actually wants it that big? Probably you want to keep the original in case it's needed - but then how do you indicate to the user that some of the images on the canvas could size up without degradation and some would look rough? You could list the resolution, of course - but if you're a user arranging things, it feels like being forced to switch out of the spatial flow to think through resolution math.
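The normalize-then-remember approach above could be sketched like this - a minimal version where the max display size and the shape of the returned object are my assumptions, not anything settled:

```javascript
// Cap display size at a max, but keep native dimensions around so we can
// later tell whether a requested scale-up would exceed source resolution.
// MAX_DISPLAY_SIZE and the return shape are illustrative assumptions.
const MAX_DISPLAY_SIZE = 1200;

function normalizeForCanvas(nativeWidth, nativeHeight) {
  const longest = Math.max(nativeWidth, nativeHeight);
  const scale = Math.min(1, MAX_DISPLAY_SIZE / longest); // never scale up
  return {
    displayWidth: Math.round(nativeWidth * scale),
    displayHeight: Math.round(nativeHeight * scale),
    nativeWidth,
    nativeHeight,
  };
}

// When the user later resizes an item on the canvas, this answers the
// "would it look rough?" question without them doing resolution math.
function wouldDegrade(item, requestedWidth) {
  return requestedWidth > item.nativeWidth;
}
```

The key design choice is that `normalizeForCanvas` only changes what's drawn, not what's stored - the original file stays untouched so upscaling later is still possible.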
You could show a red outline or something when an image gets resized above its native resolution. But while that makes sense in isolation, in the context of an app with a bunch of other things going on I think it could be confusing noise. Plus if it's just getting scaled up a little it's probably fine - and you don't want to create the feeling of a problem where there isn't one.
My impulse is always to find a way to give the user information more directly so they can then build intuition around it. But it's hard in this one.
You almost want to indicate heaviness or something like that for big files.
It's a knotty problem, but that also probably means there are some fun solutions (or at least experiments) to be done with it.
At heart it's a question of whether it should be "don't make the user think about it" or "give the user enough info to think about it clearly". Is it real complexity or incidental complexity? And of course this partly depends on where the output is headed... to be printed? To be imported into another program?
I think I pinpointed a misgiving I have about general collaborative LLM use: that the LLM will never stop and say "I don't know" or "I don't have this mental model, can you help me understand it". That's the fundamental limitation of being trained to plausibly (not correctly) generate text.
I guess it gets fuzzier with the reinforcement learning layered on top, right? But can you really reinforcement train it to say "I don't know"? You can train it to say "I don't know" based on patterns of when its training data shows people saying "I don't know". But it's against its grain, right?
If I had a collaborator who never said "I don't know" that would be a bad collaborator - I'd feel like I never had a steady foundation with them. Sometimes this does kind of happen because people feel embarrassed not to know things - they may commit code they don't understand and not want to admit it. But I feel like a key part of me getting to know someone and working with them is getting past that to where we can talk about what doesn't make sense - because a solid, shared mental model benefits us both.
I definitely use LLMs to speed up my development, but mostly in the form of autosuggest, which to me feels like the most 'honest' or 'with the grain' format for it. I'll also have it generate more generic components or bash scripts or database logic. But even then I feel like I really want to keep thinking of it as a form of 'codegen', of 'predictive text', and not a collaborator.
I don't know the consequences of all this - maybe accuracy crosses a threshold where the hallucination tradeoff is negligible. Maybe grounding in documentation helps a lot. And possibly you can chain in an evaluator model... Ideally I think you'd get a confidence or 'novelty' score from the model itself, which you'd learn to factor into your intuition about how to treat what it's generated, but I don't see a lot of progress or talk about that lately.
Maybe this is just an argument that some metaphor beyond chat is called for. Something that signals it's going to 'run with an idea' or 'play it out', not 'answer your question definitively'.
One of my favorite pieces of writing is The Web's Grain by Frank Chimero https://frankchimero.com/blog/2015/the-webs-grain/. Lately I've been trying to think about what the grain of LLM-related development could be.
There's the interface for using the LLM, where Cursor has done a lot of work. But I want to think about what kind of websites and software you can build with LLM assistance.
There's using LLMs as a piece of the pipeline - unstructured data into a JSON schema, for example. There are possibilities there, I think. The dilemma is probably giving the user a mental model of what they can do - LLMs can unlock a new level of fuzzy input, but will it be clear to the user where and how that input breaks down? Will they run to the edges and then be frustrated there?
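That pipeline shape - fuzzy text in, structured data out, with a check before anything downstream trusts it - might look something like this. `callModel` is a stand-in for whatever model API you'd actually use; the function name, prompt, and schema here are purely illustrative:

```javascript
// Sketch of an LLM-as-pipeline-step: extract structured data from
// unstructured text, then validate before passing it along. The schema
// check is where the "where does fuzzy input break down?" question lives.
function extractContact(text, callModel) {
  const raw = callModel(
    `Extract JSON matching {"name": string, "email": string} from: ${text}`
  );
  const parsed = JSON.parse(raw); // throws if the model didn't return JSON
  if (typeof parsed.name !== "string" || typeof parsed.email !== "string") {
    throw new Error("model output failed schema check");
  }
  return parsed;
}
```

The interesting UX question is what the user sees when that throw happens - that's the edge they'll run to and be frustrated at if the app hides it.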
Then there's codegen and vibe-coding. This is such a knotty feedback-loop. It's safe to say LLMs are good at boilerplate, particularly the most popular project boilerplate. So one approach to going with the grain could be picking a stable tech stack and coding according to convention.
A really interesting piece of this is how libraries fit in. I think a lot of people's favorite vibe-coded examples use libraries like three.js or matter.js, to simulate worlds or physics. I think in general there's an imprecise linking of a model's 'intelligence' and physics sims, where we imagine a model has an understanding of the world, but what it actually has is the ability to hook into a carefully designed library intended to simulate the world.
But I do love physics sims, and I think there is some energy there where we've always wanted physics sims in our software, but the complexity and the clash between digital logic and physical world logic has been too much to contain. Maybe LLMs can be the glue that helps us meld those two together.
Another place LLMs have been useful for me is approaching FFmpeg's command syntax. It's another powerful-library case: they let me dip into the powerful capabilities there and dip out without having to puzzle over syntax.
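For flavor, this is the kind of command I mean - dense flag syntax that's easy to generate and hard to memorize. File names are placeholders; the flags are standard FFmpeg usage:

```shell
# Scale a video to 640px wide; -2 preserves aspect ratio while keeping
# the height even, which many encoders require.
ffmpeg -i input.mp4 -vf "scale=640:-2" output.mp4

# Grab the first ten seconds as a clip without re-encoding.
ffmpeg -i input.mp4 -t 10 -c copy clip.mp4
```

The `scale=640:-2` idiom is exactly the sort of thing I'd rather ask for than look up.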
There's definitely a flow where you mainly use LLMs for boilerplate and for tech adjacent to your main focus. I've done that on this blog with most of the database logic being codegen. I can see a near-term future where my focus is experimental UIs, stuff that can't be easily generated, and I lean on LLMs to generate the surrounding tech.
But I don't think that's exactly approaching LLMs as Chimero approaches websites in the article - it's not looking at what 'LLMs' want to make. For that I don't have an answer currently beyond popular web stacks and this interesting relationship to libraries. Going to keep thinking, though.
I've been working on building rough shelves out of 2x4s in our basement. Partly for storage, partly to get more of a feel for how to frame things with 2x4s. The big improvement on the second set was using 3" deck screws. The process made me think about learning and how it feels different to have solved a problem versus avoiding having a problem.
On the first set I had just gotten some long Phillips-head screws from the hardware store, but they kept stripping out. I had some idea that construction screws used other drive heads - and it became clear that this was probably the reason.
I switched to star-head screws and those drove better, if still a little rough. Then I watched this video https://www.youtube.com/watch?v=SG6FJCWYeRU and followed the advice to get 3" deck screws.
For the second set of shelves I used the deck screws; they drove easier, and the unthreaded shank at the top of the screw meant the two 2x4s pulled tight.
Thinking about learning: I could have got all this information at the start, but it feels so much more real to have gone through the process of encountering the problem - really felt that problem - and then the difference of the solution. I love that process.
I think that sort of feeling is a key part of building intuition.
Another interesting puzzle for me is why intuition is hard to pass on - having experienced this issue, could I now tell someone else that the secret to building 2x4 shelves is to use deck screws? I would probably mention it. But I think the tricky part is there are lots of other parts to building shelves, which the person I'm talking to might know more or less about. It's a huge set of information. I (and I think a lot of people) need to be in the process, solving problems step-by-step, versus being given a giant checklist at the start. At least that's how I enjoy doing it.
I read (listened to) Nick Harkaway's Karla's Choice recently and then went back to the BBC audio versions of a bunch of the George Smiley novels. It's fascinating that Harkaway can write Smiley so well when the tone of his own novels (which I also like) is so different.
I'm not sure why I like the Smiley novels so much. They're always weirder than I expected -- kind of plotty in a detective way on the surface, but most often stranger and sadder underneath. I think I always like an emotionally repressed person who occasionally says something that reveals all that's going on unsaid. I guess that's English-associated. I always think of Remains of the Day as the big example, though it's been a long time since I read and watched that.
The le Carré book I think about the most is A Perfect Spy, which I think I got because someone recommended it as his most literary work. It was tough to get through at times -- just because of the bad things piling up -- the indications things were going to a terrible end.
The thing it was best at was demonstrating how weird it is to want to be a spy. How broken of a person you have to be. Interesting to think about in relation to how much media we have about spies. In this case the character's broken by his relationship to his con-man father. By needing to take care of him. By learning from him even where he didn't want to. There's an unsparing-ness to it that I always find impressive -- like he's willing to see things as they are. (Though that's a fine line -- some art veers towards "if it's the rougher view it's the true one", which it is not.)
scrawl is back. using waku RSC based on the feed rebuild.
Shower idea: spreadsheet style image editor. (Also very much inspired by the way canvas works.)
There would be a source image and a destination image (maybe call the destination image the canvas?). Break the image up into a 16x16 grid of cells and stretch the image to fit. Then you've got a selector that can expand to different cell sizes and can copy-paste paint from the source image to the canvas.
So for that you'd need to be able to control the selector, resize it, and change the source cells (while the destination cells stay the same).
You might also need layers to keep things non-destructive.
Actually you might want two modes of movement on the canvas: one where the source moves with it, and one where the source stays static. All of these should be implemented like live formulas in Excel. Although you wouldn't be able to do a mutating feedback loop exactly, because your original is kept in original condition and you're always linking from there -- unless you also have the option to link from your destination. That might be fun.
You'd have two different colored outlines to indicate source and destination. That's actually quite a few combinations of modes - it might take a while to get controls that feel good.
I thought you would want to be able to change resolution (up and down). But actually it might be better to be able to zoom by factors of 2, like xzoom does. Depends which project this is a part of.
It would definitely be fun to show the source and destination info at the bottom as an Excel-type formula.
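The core of the idea - a fixed grid plus a spreadsheet-style link between source and destination cells - fits in a small sketch. The 16x16 grid comes from the notes above; the formula string format and function names are my invention:

```javascript
// Divide an image into a 16x16 grid and map cells to pixel rectangles,
// so a selector can copy-paste paint cell-by-cell. GRID matches the
// grid size described in the notes; everything else is illustrative.
const GRID = 16;

// Which pixel rectangle does cell (col, row) cover in an image?
function cellRect(imgWidth, imgHeight, col, row) {
  const cw = imgWidth / GRID;
  const ch = imgHeight / GRID;
  return { x: col * cw, y: row * ch, width: cw, height: ch };
}

// A live link could be stored like a spreadsheet formula: destination
// cell B3 reading from source cell D7 becomes "B3=SRC!D7".
function formula(destCol, destRow, srcCol, srcRow) {
  const name = (c, r) => String.fromCharCode(65 + c) + (r + 1);
  return `${name(destCol, destRow)}=SRC!${name(srcCol, srcRow)}`;
}
```

Storing the links as formulas rather than baked pixels is what keeps everything non-destructive: re-evaluating the formulas against the untouched source is the render step.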
Something I've remembered for a long time is the description by Chris Langan - who supposedly had the highest IQ in the world - of the kind of thinking he does for pleasure (as told in a First Person episode). He talks about imagining the universe and all the forces at play within it. I recognized the pleasure of some of that. One of my main pleasures from design/programming is holding a system in my head and imagining how different adjustments would ripple out through the system. Often it is problem solving - 'how can I better balance this system' - but not always.
I have been thinking about that pleasure, and what is at the core of it in relation to mental models. It seems connected to so many other descriptions, but I still feel like I have a fuzzy picture of it. It seems connected to the pleasure of physicists coming up with an elegant explanation for the physical world (and I have read some discussion of what elegant actually means: to explain the most with the least, although there are tons of judgements within that). It also seems connected to the pleasures of worldbuilding, of telling a believable (recognizable) fictional story. Because, I think, being able to do that implies you have a model of the real world in your head, or of a system of relationships between people.
Yesterday I was reading Color and Light, about how painters learned to paint the effects of light. From that, I got a picture of painting physical scenes as your ability to hold a model of the effects of light in your head, and then to apply those to the fictional characters and objects you put in the scene. Interesting to compare that to present-day videogame rendering, where people need to build a model of the world into the game. In both cases I wonder how big a piece the understanding of the systems in the world is, and how much is emulating those systems in your chosen medium (paint or computer graphics rendering). That's something I want to write more about.
How about flow state? Is holding things in your mind pretty much flow state? I don't think so, and they seem close enough that the effort to distinguish between them could be useful in understanding both. Flow state, I think, also implies that you have a model in your head (although it is implied it is more intuitive, less explicit?) - that's how you're able to maintain flow state, I think. You can operate in a flow state because you have enough of a model that you can move confidently, without getting stuck in errors and having to look stuff up. A flow state is full engagement with the system. I don't think it means you have a complete understanding. Even now, just thinking about it, I can conjure up this feeling of full engagement, of attention, almost like I'm balancing a system of physical objects.
The idea of a model you're trying to approach seems easy to relate to programming. What about art? What about just trying to create something beautiful? What is it there? In Color and Light, Gurney talks about painters trying to convey human emotion through a landscape. Why use a landscape? Why not communicate more directly? Why use attention to the physical world as an intermediary? After that he showed some symbolist paintings, and looking at those I thought, well, maybe that is one reason: if you try to convey them directly (anger as a demon) it looks trite. That explains some of it, I think. But it still doesn't explain to me why the landscape is a successful way of conveying emotion.
I had another answer, which I think I can write well but still doesn't seem enough: that painting a landscape with real attention to detail is an act of attention. Is it that attention is everything? Real engagement is everything? And that because we know something of how a painting is made, the labor involved, the successful painting is just a message: 'this is worth paying attention to'? And then it could be anything? It seems to follow that it could be anything. And beautiful paintings have been made of anything. But that still seems like a dissatisfying answer in terms of how to make art, how to live. Maybe it is 'pay attention to things that resonate for you'. Then the question is: is the success because it also resonates for another person? Or because you successfully communicated your experience of the resonating?
Was this what I wanted to write about? Mostly. I wanted to think about holding a system in your head in relation to using tools. Because do I want to make tools you can hold in your head? What am I making tools for? How does ego come into all of this?