552 words

I want to try and sort / rearrange / explore some of my thoughts on why I make creative tools.

Sometimes I wonder why (or feel bad?) I don't use the tools more myself. Or I wonder what it means that I don't.

If I want to make it pointed I ask 'do I make tools because I want to be close to creativity but don't myself have something to say?'

I don't think that's true (or maybe a piece of it is). I think in general I like to make systems. I also get frustrated with existing tools - I want a portal into a place where tools look different, work differently. I think that's worth something in itself, just to have alternate, considered tools - alternate coherent ones.

Coherency seems like a big thing here - I want to take a little branch off from where things are and build a little ways out on it ('slowly going out on a limb') into something a little new. But why I build it into something small and self-contained - not just a thought-experiment but a working tool - is I like the journey of building the rest of the system off those first branch assumptions. And also feeling what are essential divergences and what are distractions - because I do want to position it within our current world of interactions - usually.

I also enjoy the dialog with the material. I often start from an API or a capability, and I extend or frame that capability a certain way. In my favorites, I think I succeed at framing and hooking in a way that makes a capability feel different - or reveals that capability to the user - makes them think about that capability differently, grasp it more fully, and that echoes the discoveries I had in the making of the tool.

There's connections there to art - I think about James Turrell using windows to frame the sky. That's directing your attention. To say this experience can be as rich as anything else if you direct your attention to it - and the frame there helps you so much - if you try and pay attention overall to how the sky is changing you get caught up, overwhelmed, or - it washes out somehow - it's still something pretty, but it's this big category of sunset... With the frame, you can say, how does the light in this corner change over time. And you can look at the whole thing at once and let it wash over you in a good way - have a feeling for the whole, but feel like the whole is not overwhelming - it's framed for you, narrowed for you. Who knows how much of this feeling comes from us training ourselves to look at other things in frames...

So (stretching) I want to do things like frame how parts of a computer work, how our computers work, or more how media works on a computer. I do think about pixels all the time, and about re-stranging the fact that our screen is made of pixels, that all of the wondrous things we see on screens are always just this gridded array of pixels - how strange that is. What other possibilities are in that?

207 words

Looking at samplers and drum machine patterns and thinking about how their primitives map to other media.

First images - images on a computer are made of rgb pixels, pretty literally. Although the mechanism/simulation that leads to that final pixel result can be a lot of things.

A video is a sequence of images. It adds the element of time.

Audio also has that element of time as a necessary condition.

Most of my experiments output images; it's interesting to explore how time changes things.

Writing on a screen ends up displayed as pixels as well.

Images and writing both do arguably have a time element in how the viewer takes them in. That is entirely in the viewer's control - though a lot of conventions surround it (arguably dictate it): if you don't read something in order you're kind of knowingly taking it into your own hands; image conventions are sneakier.

What about interactive generation and games? Do they fit into that set of things? I guess an image editor is an interactive generator as well...

How do these feel in relation to the world too? In the world you have to do some work to isolate something down to a single image or sequence of images...

264 words

I've been thinking about prediction. Mostly in the context of M being almost 2 years old and asking/demanding more things.

When she reminds me of my water bottle by pointing and saying "dada bottle" or wants to throw away a granola bar wrapper by pointing and saying "trash" it feels to me like a cross between a prediction and a request. She's happy to see the action taken because it fulfills her prediction. And if she can make accurate predictions about the world that means she understands the world. (This is all my speculation.)

I also try and predict things (in my head) to check my understanding of them. I think a commonly recommended anxiety strategy is to write down worries and then check in on whether your predictions came true (they almost always do not), and then try and adjust your predictions based off that.

I'm also both interested in (and skeptical of) some of the ideas around prediction markets - I do think that bringing money into a prediction often shifts how people think about it. From something they want to happen or identify with to a more analytical attempt to figure out what will happen. And also a more real assessment of risk - a lot of things people predict are never revisited - you forget the ones that didn't happen and only remember the ones that did.

I think checking your predictions is a good way to see if you understand a system - you do have to also keep in mind probability and not get thrown off when improbable things happen.

467 words

Morning 10 minute freewrite scrawl. I want to return to the timer concept today. Build out the frontend with the iframe portals. Reduce data scope down to just startTime, duration, and label (leave calendar time blocks for later). I think the shuffle mechanism will be the real small proof of what shared data/structure could do for generated apps.

Remembered walking today that the two pieces should be separate - timer is in charge of displaying the apps but the iframe links could go anywhere. I'll probably have a convenience repo to start with. I wonder how much focus I should put on getting a live react playground working with it - I wonder if I could use one of the LLM canvas playgrounds to actually run it - with the share link - though how stable are those share links really?

Some of my recent work experience from submissions should come in handy - it's always surprising how the most mundane projects have a way of working themselves back in.

Ideas-wise also really curious about this guide to living structures by Ken Isaacs that was referenced on Merveilles. Have to get my printer connection on this laptop working and then I'll print it out.

Some of his other work reminded me about discussions about transparency layers. Think I should get some of those again - will also be useful for planning out tattoos.

I also had the related idea for a drawing/image editing app where each layer is an iframe. This follows up the work I'm doing with scrape. The worry with scrape is just that the layer business is overwhelming the scrape effect which is my main focus. Not the first time that's happened so of course I'm looking for a place to modularize or generalize it. Reminds me again of 'carving at the joints'. I want to do a better job of remembering key touchstone phrases - I should add carving at the joints to the quote page. It might be the one I think of most.

That got me thinking about the connection between frames and layers. Based partly on my other work on frame trying to isolate GIF and video frames. I can't tell if I have too many unfinished constraint-system projects right now or if it's just the natural mess of working things out.

I really like combining - or playing with combining - frames and layers. Frames are time based, or sequence based? Layers have an order too. Playing between the two, a practical problem is there are often just too many frames - if I explode them all there will be too little difference between them and too many to run. But FPS would be the natural solve for that, right? Maybe even playing with the idea of keyframes.
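The FPS solve could be sketched as choosing which source frames survive the explosion into layers. A rough sketch of what I mean - all the names here are made up for illustration, not from any of the actual projects:

```typescript
// Hypothetical sketch: thin a frame sequence down to a target FPS before
// exploding it into layers.
function sampleFrameIndices(
  frameCount: number,
  sourceFps: number,
  targetFps: number
): number[] {
  // Step through the source timeline at the target rate, keeping the
  // nearest earlier source frame at each step (never less than 1 frame).
  const step = Math.max(1, sourceFps / targetFps);
  const indices: number[] = [];
  for (let t = 0; t < frameCount; t += step) {
    indices.push(Math.floor(t));
  }
  return indices;
}

// A 90-frame clip at 30fps, thinned to 5fps, becomes 15 layers:
// sampleFrameIndices(90, 30, 5) -> [0, 6, 12, ..., 84]
```

Keyframes would be the smarter version of this - picking frames by how much they differ rather than at a fixed interval.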

288 words

I've been thinking about what makes an app or program feel coherent. Like they have an underlying logic.

I think our appreciation of coherency comes from interaction with the natural world, where coherency emerges out of the underlying physics.

Interfaces are not chained to physics but they use it as metaphor a lot.

I've always been drawn to the idea that coherency in software apps should come from "going with the grain", sticking to how the software underneath works. That maybe there's something there that could give a foundation like the underlying physics does - though it's definitely a different shape of thing than physics is.

I do a lot of this with the image apps, where I try and use drawImage, because it feels like that is the underlying structure of computer drawing - copying bitmaps from one place to another. Because it's 'natural' it can also be done very fast, and that opens up interesting possibilities in software.

A program can feel coherent if it feels like it was made by someone with a point-of-view. Like they thought this is how this process should work. This is connected to the idea of coherency and as emerging from an underlying process, but it's not necessarily the same, I think?

I think about this in relation to music. What makes music feel powerful is if it feels like it comes from a point-of-view. It can take wildly different forms but I think that point-of-view is always what's interesting and compelling to me. Again this seems connected to what makes for interesting software, but it's not the same.

A point-of-view comes across from decisions made at different scales (big-picture and small-picture). Both should have influenced each other in the making.

72 words

how long (with all of the generative media) before the fact of a camera capturing only what is present before it becomes novel and more interesting again?

I guess it would be more like "the act of arranging the physical world to portray an image" which is then captured by the camera that is what will be interesting. and already is?

extremely fuzzy territory. but I want to keep thinking about it.

186 words

One of the big sticking points in my "react apps that use canvas" experiments keeps being "where does the canvas live". Most often I put it in the .tsx and use a ref to access it in functions and useEffects. It works but never feels great. If I have canvases that are never going to be rendered I can just create them where needed - caching behind a ref to make sure I don't end up creating one on each call (possibly it wouldn't matter performance-wise if I did).

Maybe I should sort of allocate and create all my canvases in the functional code and then also render "render" canvases in the UI that I draw the "real" ones onto. But sometimes (like in my current project) the number of canvases is dynamic - dependent on something else. In that case I need to update that dynamic number in both the DOM and the drawing code - having it in react state so it reliably triggers a DOM rerender helps, although sometimes I need to run drawing code without waiting for the state update to clear, leading to more helper refs...

23 words

the stack and the heap but the heap is an infinite canvas (is this something)?

the reverse chronological list and the infinite canvas.

133 words

One reason to make things is to develop a deeper relationship with the things you put into them.

When songs I put on mix CDs - particularly ones I thought hard about sequencing or mixing into another song - come on, I'm transported back to when I was making them.

I just got an email from my college that showed the college logo, and it brought me back to trying to find a place for that logo on the college band CDs we made.

And it wasn't something inherent to those songs or logos that fixed those memories for me - it was me trying to use them, where I had to get a more in-depth feel for them, their shape and edges, as I tried to fit them into something else.

417 words

How do you handle image resolution and file size in a spatial canvas?

In a file browser image files all feel about the same, even though they can vary widely in resolution and size. When you drop them into a spatial canvas web app, that app has to decide how to size them. The most 'honest' way would probably be to render them at the resolution of the files - but I think this would often be surprising to people - they won't understand why some images are twice the size of others.

So you want to normalize - probably pick a max size and scale big ones down to that - then you have to think about how you want to manage that for performance. Do you use canvas to resize it down 'for real'? That means if it was bigger you lose that extra resolution. What if the user later actually wants it that big? Probably you want to keep the original size in case it's needed - but then how do you indicate to the user that some of the images on the canvas could size up without degradation and some would look rough? You list the resolution of course - but if you're a user working on arranging things it feels like it's forcing you to switch out of the spatial flow to think through resolution math.
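The normalize-but-keep-the-original idea could be sketched like this (my own invented names, not any particular app): scale down so the longer side fits a max, never scale up, and keep the natural size around so you can tell how much upscaling headroom an image has left.

```typescript
// Hypothetical sketch: normalize a dropped image's display size while
// remembering its natural resolution.
interface PlacedImage {
  naturalWidth: number;
  naturalHeight: number;
  displayWidth: number;
  displayHeight: number;
}

function placeImage(
  naturalWidth: number,
  naturalHeight: number,
  maxSide: number
): PlacedImage {
  // Scale down (never up) so the longer side fits within maxSide.
  const scale = Math.min(1, maxSide / Math.max(naturalWidth, naturalHeight));
  return {
    naturalWidth,
    naturalHeight,
    displayWidth: Math.round(naturalWidth * scale),
    displayHeight: Math.round(naturalHeight * scale),
  };
}

// How far the user could size this up before it starts to look rough:
function upscaleHeadroom(img: PlacedImage): number {
  return img.naturalWidth / img.displayWidth;
}
```

A headroom of 1 would mean the image is already shown at full resolution; anything above 1 is room to grow without degradation - which is exactly the number you'd want to surface somehow without forcing the resolution math on the user.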

You could show a red outline or something when something gets resized above resolution. But while that makes sense in isolation, in the context of an app with a bunch of other things going on I think it could be confusing noise. Plus sometimes if it's just getting scaled up a little it's probably fine - and you don't want to create the feeling of a problem where there isn't one.

My impulse is always to find a way to give the user information more directly so they can then build intuition around it. But it's hard in this one.

You almost want to indicate heaviness or something like that for big files.

It's a knotty problem, but that also probably means there are some fun solutions (or at least experiments) to be done with it.

At heart it's a question of whether it should be "don't make the user think about it" or "give the user enough info to think about it clearly". Is it real complexity or incidental complexity? And of course this partly depends on where the output is headed... to be printed? To be imported into another program?
