In Short
Digital art made in collaboration with various artificial intelligences. Mostly imaginary girls.
Artistic Goals (in the most general sense imaginable)
Make things that make good things happen.
Official(ish) Bio
My name is billy Z duke, and I'm the "human" in "inhumantouch." I hold a Bachelor of Fine Arts degree (with top academic honors) from Mason Gross School of the Arts, the arts college of NJ state school Rutgers University, and was the first Rutgers student ever accepted for a junior year abroad at the Slade School of Fine Art at University College London. My primary concentration back then was oil painting, but over time I branched out into multiple media, ranging from minimal, digitally printed text-only pieces to sculptures in various materials and larger-scale, site-specific installations, usually used as settings for related performances or video presentations.

"Magnificent", Self-Portrait with Sheet Magnifier, photo + Photoshop, 2000 + DALL-E 2 Digital Outpainting, 2022

After graduation I moved to NYC and entered the job market, which, to be perfectly honest, I've mostly despised. In retrospect, I went about acquiring the wrong kinds of jobs for my temperament (full-time, in-office, at a desk), which repeatedly led to relatively rapid burnout in every subsequent position. Early on, I veered toward programming, teaching myself web development because a) I didn't think I could hack a design gig involving constant requests to revise art I'd created, and b) I initially found it less stressful to work with computers than with co-workers. Although coding has kept me mostly employed and able to support myself for the past couple of decades, the experience filling out my resume has trapped me in that "career path," even though I could walk away right now, never look at code again, and have no regrets. Like they say: be careful what you get good at.
In addition, I was fairly disillusioned with artmaking in general after getting my degree. The program at Mason Gross was much more focused on helping students hone their artistic intentions and craft personal manifestos, whereas I had been hoping for a primarily technical schooling; at the time I had zero desire to analyze WHY I made art, I just wanted greater and wider mastery of the skills used to make it. So I spent a lot of time simply living my life, using my free hours to focus on music composition and performances with various backing bands as my artistic outlet. The latest such collective was The Wrong Windows, which technically still exists, although, after the unfortunate societal decimation of the pandemic, it's back to basically just me, making continued production/performance even more of a time-and-energy suck than it already was (and believe me, as much as I loved doing it, it certainly already was).

Self Portrait Outtake for 2021-11-30 Live Solo Show Promo


I always suspected I would return to visual art at some point, but I was still pleasantly surprised to be so inspired by the discovery of text-to-image AI in late summer 2022, and I've been doing little else with my spare time but messing with it ever since. I decided to return to my high school artmaking roots, when I would spend a lot of time doodling imaginary girls in the margins of whichever notebook happened to be boring me stiff at the time, and quickly developed a system (primarily DALL-E 2, often with a dash of GFPGAN facial restoration, and always with generous helpings of Adobe Photoshop) for conjuring enticing alterna-girls from sheer ether and placing them in settings as intriguing as possible.
Working with any given generative AI as a collaborator has been, and continues to be, an extremely interesting experience, and a somewhat different one from making more traditional art on my own. These AIs undeniably compress, by several orders of magnitude, the time it takes to craft a decent image, but this particular partnership frequently requires me to loosen my grip on exactly what I'm trying to achieve: to pick up on subtleties (or not-so-subtleties) the AI is suggesting and run with / emphasize / enhance them, rather than reject and quash generation after generation until it happens to produce something more in line with "what I want." There's no guarantee, after all, that the latter path will ever end in success; the only guarantee there is massive frustration.

"Curse of the Hairy Palm (Panel 2: C-Face)", Oil on Canvas, 1992 + DALL-E 2 Digital Outpainting, 2022


Of the publicly available options, I originally preferred DALL-E because of OpenAI's brilliantly simple online editor interface, which made inpainting (altering select portions of an already-generated image) and outpainting (extending the frame of an existing image, i.e., "zooming out" or "uncropping") extremely easy and quick (at least when their site wasn't being hammered too hard and my internet "provider" wasn't clamping down on my bandwidth). Creating and revising images in this piecemeal, almost mosaic fashion let me feel more involved in a process of ongoing creation, as opposed to the "one-and-done" prompt-then-image workflow one initially got with the alternative engines Midjourney and Stable Diffusion. (My current MacBook is one generation too old to run Stable Diffusion locally [goddammit], so I've resorted to renting remote server time to experiment with various iterations, although even the most intuitive/useful of those I've found so far [mage.space, krea.ai] still fall somewhat short of replicating the sense of interaction/collaboration I could achieve on DALL-E 2's "labs" site.) As a result, I've learned more about the intricacies of Photoshop in the past 2 years than I had in total over the previous 20.
At a certain arbitrary point in time (specifically, early November 2022), my dependence on DALL-E began to show significant downsides. Suddenly it took a lot more effort (and many more generations) to get output I considered sufficient/satisfying/high quality, and I've spent a lot of time since trying to figure out whether, after an initial string of dozens of almost instantly "successful" images, that was due to some adjustment in my own perception/expectations, some failing on the part of the AI, or a little bit of both; to tell you the truth, I still can't say for sure. I keep hoping they'll release an upgraded labs site sooner rather than later. Meanwhile, I've been spending much more time experimenting with and learning the ropes of Midjourney and Stable Diffusion, both of which are advancing by leaps and bounds on a near-weekly basis, and so far the wonders have yet to cease.

"Arranged Marriage", Performance/Installation Detail, 1993 
