Eventually, this led me to write my own indie book on generative art with Go: https://p5v.gumroad.com/l/generative-art-in-golang, which in turn led to a talk I gave at GopherCon Europe: https://youtu.be/NtBTNllI_LY?si=GMePA3CfVQZJq2O7
These were great times, but I think the book is not worth buying anymore. Sadly, AI-generated imagery sort of killed the mojo of algorithmic art for me, and I've been trying to get back to it for the last few years.
I wasn't unhappy with some of the results, but it was an interesting and frustrating struggle.
https://www.flickr.com/photos/32832718@N00/17951484570/in/ph... https://www.flickr.com/photos/32832718@N00/19868350512/in/ph... https://www.flickr.com/photos/32832718@N00/17952106385/in/ph...
You can push AI in the same way and end up in some unusual spaces, but the quality often degrades when you get there.
https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fr...
https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fo...
I used to (and occasionally still do) make generative art, and I found this too! Although I'm not really sure why - I still love good generative art and don't intentionally consume any AI-generated art.
I think possibly one of the main things that happened was a lot of online generative art communities got flooded first by NFTs, and then AI generated art. I find it a lot harder to reliably find other people's generative art these days.
It's what I doodle with to generate images using a stack-based program per pixel.
Every character is a stack operation, and you have 50 characters to make something special.
Mine is also pixel coloring at the lowest level. I have a shading kernel on the GPU doing the low-level work, mainly applying colors recursively, like a fractal. I got sick of writing shader code, so I made a high-level language that supports math operations in concise expressions and compiles to GPU shader code. The main thing is that it supports functions, which lets me reuse code and build up abstractions. E.g. once I get the "ring" pattern settled, it's defined as a function and I can use it in other places, combine it with other functions, and have it called by other functions.
One of these days when I get some time, I'll formalize it and publish it.
I'm not sure art is still meant to be a widely shared experience, and smarter people than me should tackle this idea.
For me (and many others), the “how” of art is just as important as the “what”, if not more important. There are installations that reflect this, many of which are interactive and allow the observer to become part of the art itself.
And if you extend the definition of “generative”, it can include many other methods, like swinging a paint can with a hole in the bottom over an empty canvas to create random patterns based on pendulum movement. I, like many others, recognize the amount of creativity and effort that goes into this type of “generative” art, especially in comparison to others. I also appreciate the creativity and complexity of the grandparent’s generative system.
I'm glad people are interested in art discourse and exploring art in general. Art is a very personal thing. Different people see art in different ways. Yet there are some recurring themes time after time.
I got my insight into art from music and why people love it so much. Music and songs are basically repeating patterns with slight variations along multiple dimensions: pitch, beat, tone, rhyme, lyrics, etc. The human mind is a super pattern-processing machine, as part of our evolutionary survival traits. Pattern brings structure, abstraction, and comfort. But strictly repetitive patterns bore the mind. Humans love patterns, but with variation and imperfection.
The human mind is very good at filling in the missing pieces of a pattern, again from our evolutionary survival traits. Our ancestors could look at the tail of an animal and fill in the blank that it was a tiger hiding behind a big rock. The filling-in of missing pieces comes from experience and learning. It really is the original generative AI.
I believe the variation and imperfection in patterns trigger the mind's fill-in-the-blank function, which triggers the generative function, which can run wild generating a wide range of imagination. That's why art can provoke different reactions in different people, as each has their own life experience and thus a different generated result.
I think art, at the most basic level, is patterns with variation, imperfection, and blanks. Computer-generated art thus needs to fulfill that basic requirement, at the least, to be called art.
I started out in all the usual ways - inspired by Daniel Shiffman, making generative art first using Processing, then p5.js, and now mostly I create art by writing shaders. Recently, after being laid off from my job, I took my obsession further and released my very first mobile app - https://www.photogenesis.app - as a homage to generative art.
It's an app that applies various generative effects/techniques to your photos, letting you turn your photos into art (not using AI). I'm really proud of it and if you've been in the generative art space for a while you'll instantly recognise many of the techniques I use (circle packing, line walkers, mosaic grid patterns, marching squares, voronoi tessellation, etc.) pretty much directly inspired by various Coding Train videos.
I love the generative art space and plan to spend a lot more time doing things in this area (as long as I can afford it) :-)
I find this to be a key insight. I've been working on a black-and-white film app for a while now (it's on my website in profile if you're curious), and in the early stages I spent time poring over academic papers that claim to build an actual physical model of how silver halide emulsions react to light.
I quickly realized this was a dead end because 1) they were horribly inefficient (it's not uncommon for photographers to have 50-100MP photos these days, and I don't want my emulator to take several minutes to preview/export a full image), and 2) the results didn't even look that good or close to actual film in the end (sometimes to the point where I wondered whether the authors had actually looked at real film, rather than getting lost in their own physical/mathematical model of how film "should" behave).
Forgetting the physics for a moment, and focusing instead on what things look and feel like and how that can be closely approximated with a real-time computer graphics approach, yielded far better results.
Of course the physics can sometimes shed some light on why something is missing from your results, and give you vocabulary for the mechanics of it, but that doesn't mean you should try to emulate it accurately.
I read this interview with spktra/Josh Fagin and how he worked on digitally recreating how light scatters through animation cels, which creates a certain effect that is missing from digital animation - and it was validating to read a similar insight:
"The key isn’t simulating the science perfectly, but training your eye to recognize the character of analog light through film, so you can recreate the feeling of it."
He showed some techniques. I think someone asked a question about the best way, but the presenter got a little ranty and basically said the way that looks best to your eye is the best way.
And as you point out, at capture time you can use color filters to affect the image; processing too can lead to fairly different results based on what developer you use.
This is in contrast to color film, which I find to be much more rigid and narrow in how it’s meant to look and be processed; one could argue there’s much less range for interpretation from negative to final image (especially so with slide film, which completely falls apart if it’s ever so slightly over/under exposed).
But it's still useful to have some of those effects catalogued and easily accessible as presets. Photoshop doesn't quite do that, which on the one hand makes it hard for beginners to get a good look, but also leaves some space for those who want to go deeper to get more creative.
TouchDesigner is more popular and, I suppose, declarative, but vvvv is more general purpose and similar to the Processing workflow. It’s a very weird tool I’ve used for everything from MIDI instruments, live installations, escape rooms, and VJ rigs to, well, proc art.
I used it to create art, basically taking animal photos and using the DNA sequence from that animal to recreate the photo using the four letters. (I did four passes using different-size letters and layered them in GIMP.) People seem to like them, and they got into an art:science show.
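The letter-mosaic idea might be sketched like this, one pass at a fixed cell size; the names and sampling strategy are illustrative, not the commenter's actual code:

```go
package main

import "fmt"

// stamp records one letter placement: which base to draw, where, and
// the photo's brightness there (used to tint the glyph). Layering
// several passes with different cell sizes rebuilds the photo.
type stamp struct {
	letter rune
	x, y   int
	gray   uint8
}

// mosaic walks the image in cell-sized steps, consuming the DNA
// sequence in order and sampling the photo at each cell's origin.
func mosaic(seq string, gray [][]uint8, cell int) []stamp {
	var out []stamp
	i := 0
	for y := 0; y+cell <= len(gray); y += cell {
		for x := 0; x+cell <= len(gray[0]); x += cell {
			out = append(out, stamp{
				letter: rune(seq[i%len(seq)]), // wrap if the sequence runs out
				x:      x,
				y:      y,
				gray:   gray[y][x],
			})
			i++
		}
	}
	return out
}

func main() {
	img := [][]uint8{{10, 200}, {120, 30}}
	fmt.Println(mosaic("ACGT", img, 1))
}
```

Rendering the stamps with an actual font, at four different cell sizes, and compositing the layers would reproduce the multi-pass effect described above.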
The Coding Train has a lot of videos on using p5.js, some of them more sophisticated than the childish iconography suggests. It’s pretty fun.
And
Both written by the same guy who wrote the Janet for Mortals book, about the Janet language, which supports both those sites.
I've really wanted to see if I could combine those tools to make Arabic-art-inspired generative art. Does anyone know of any projects doing that? There is a lot of crossover between modern generative art and ancient Arabic art.
https://web.archive.org/web/20140701114342/http://www.cgl.uw...
https://web.archive.org/web/20180426122308/http://www.wozzec...
Of course the topic is still alive to some extent, but the above 2 "dead" homepages remain some of the best entry points I've found overall.
One major truth discovered:
Art is always in the eye of the beholder.
I like to think of fine art as a subjective human expression to stir emotion.
I think there are newer versions of this book, though I haven't tried to find them. It's a hefty coffee-table book as-is.
What a strange claim. How late is too late to be considered early?