How Automation Enables Creativity

The robots are coming to help you make cool stuff!

Traverse Davies
6 min read · Jan 27

--

Image copyright is up in the air: it's an AI generation, so it may belong to the author, depending on how all of that shakes out

I saw an ad a while back. It was for voiceover work for video… but it wasn't the usual kind of ad; it was for computer-generated voiceovers that, it claimed, are indistinguishable from human voice work. We've all heard this in practice; sometimes it works well, sometimes it doesn't.

A few years ago, I was working on a project and had to record some voiceover: car makes, car models, numbers. All in all, it was a few thousand words, and it cost thousands of dollars. I had to pay for studio time, a voice actor, and a sound technician, and it took many hours.

Current text-to-speech tools would let me do something like that for pennies, and that's on the high side. That's if I don't use the tool from the ad; with that tool, maybe a few dollars, maybe a few hundred. I'm pretty technical, so I would probably use a Google Colab notebook that does text-to-speech, or one of the many open-source tools available for Linux.
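The cost gap is easy to see with some back-of-the-envelope arithmetic. Every rate below is a hypothetical I've made up to show the orders of magnitude involved, not a quote; the per-character pricing model is how many cloud TTS services bill:

```python
# Back-of-the-envelope cost comparison for a ~3,000-word voiceover job.
# All figures are illustrative assumptions, not real quotes.

WORDS = 3_000

# Traditional route: studio time, voice actor, sound technician (hypothetical rates).
studio_hours = 6
studio_rate = 100          # $/hour for the studio
actor_fee = 1_500          # flat fee for the voice actor
technician_fee = 500       # flat fee for the sound technician
traditional_cost = studio_hours * studio_rate + actor_fee + technician_fee

# Cloud TTS route: priced per character (hypothetical $16 per 1M characters,
# roughly in line with published cloud TTS pricing tiers).
avg_chars_per_word = 6
tts_cost = WORDS * avg_chars_per_word * 16 / 1_000_000

print(f"Traditional: ${traditional_cost:,.2f}")
print(f"Cloud TTS:   ${tts_cost:.2f}")
```

With these made-up numbers the traditional route lands around $2,600 while the TTS route is well under a dollar; even if my rates are off by a factor of five in either direction, the gap stays enormous.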

As mentioned above, copyright is uncertain

I have used my own photos for most of my Medium articles. Not all, but most. When I haven't, it's usually because I didn't have a photo that matched the theme. The image above was generated by an AI, specifically Stable Diffusion running in Google Colab. Whatever I need for my article, I can type it out and get an image that matches. If I need a photo, the AI creates something that looks like a photo. If I need an illustration, it creates an illustration. There is no reason for me to pay for a stock photo that might be used in millions of places around the web and is only sort of a match for what I want.
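As a rough sketch of that workflow, here is approximately what generating an article image looks like with Hugging Face's `diffusers` library. This assumes you have `diffusers`, `transformers`, and `torch` installed and a GPU to run on; the checkpoint name is one commonly used Stable Diffusion model, and `generate_cover_image` is just a name I've made up for the illustration:

```python
def generate_cover_image(prompt, out_path="cover.png"):
    """Generate a single image from a text prompt with Stable Diffusion.

    Requires: pip install diffusers transformers torch (and ideally a GPU).
    """
    # Imports live inside the function so the sketch can be read
    # without the heavy dependencies installed.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a commonly used Stable Diffusion checkpoint.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # Run the text-to-image pipeline and save the first result.
    image = pipe(prompt).images[0]
    image.save(out_path)
    return out_path
```

In a Colab notebook you would call it with the same kind of prompt you'd type into any image generator, e.g. `generate_cover_image("a robot painting at an easel, photorealistic")`.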

There are animation tools for AI art generation as well; you can create a short movie, though the results are weird and not something you can use for actual production. The thing is, Google and Meta have both released papers outlining methods for creating short clips that you could actually use. Right now, with what I have access to at least, there is a coherence issue: your short clip will be flickery, with a dreamlike quality. The new stuff from Google is coherent…

--

Traverse Davies

I do survival, self-publishing consultation, and writing. Check out my blog: https://dreamtime.logic11.com