The latest AI tools are fascinating! I am losing count of the new uses that keep popping up. I will post more practical content soon; this post is just to share a fun exercise. Here is what I did:
I repeatedly asked ChatGPT to describe “a nonsensical image,” adding very simple follow-up requests, like asking for an office setting or for more color. I repeated this five times. Each time I copied the output – about a paragraph – and fed it as a prompt to MidJourney.
Here are the five images (one above, four below) generated in a single session – a dialog with ChatGPT – in two minutes, from the same one-sentence description and with only slight additional modification requests. I did not “reject” any of the images; all seemed worth a look, and their diversity worth appreciating. Had I used MidJourney parameters (I did not), the outputs would have varied even more in color, texture, lighting, degree of “realism,” and so on.
(Who gets credit for these images is a complicated issue.)
Here is a wild thought. Not only will “AI Prompt Creator” (someone who writes the descriptions fed to text-to-image algorithms) become a common job title. There will also be “AI Pre-Prompt Designers.” The role will:
- Know what to ask ChatGPT so that it produces a prompt that steers an AI drawing program toward the desired result.
- Hand-edit the prompt if necessary.
- Feed it to an AI drawing program.
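The three steps above can be sketched as a small pipeline. This is a toy illustration under loud assumptions: the OpenAI chat-completions call is the real client API, but the model name is a placeholder of my choosing, and `render_image` is entirely hypothetical – MidJourney has no official API, so in practice one pastes the prompt in by hand.

```python
# Sketch of a "pre-prompt designer" pipeline. The OpenAI client call is
# real; the model name is an assumption, and render_image() stands in for
# whatever text-to-image tool eventually receives the prompt.

def build_messages(base_description, refinements):
    """Assemble the chat turns that ask ChatGPT to write an image prompt."""
    messages = [
        {"role": "user",
         "content": f"Describe {base_description} in one vivid paragraph, "
                    "suitable as a prompt for a text-to-image model."}
    ]
    # Each simple follow-up ("set it in an office", "add colors") becomes
    # another user turn in the same dialog.
    for request in refinements:
        messages.append({"role": "user", "content": request})
    return messages


def design_prompt(client, base_description, refinements):
    """Step 1: have ChatGPT draft the image prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=build_messages(base_description, refinements),
    )
    return response.choices[0].message.content


# Steps 2 and 3 – hand-editing, then feeding the drawing program – would
# look roughly like this (render_image is not a real API):
#
# prompt = design_prompt(client, "a nonsensical image",
#                        ["set it in an office environment", "add colors"])
# prompt = input(f"Edit if needed:\n{prompt}\n> ") or prompt
# render_image(prompt)
```

The point of `build_messages` is that the whole refinement history stays in one dialog, which is what made my five variations cohere.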
These positions will require both creative and analytic minds.
My advantage in this toy experiment was, of course, that I had no specific outcome in mind. When I do have one, it is hard to tame any of these AI systems, whether text-generating or image-generating. There is a lot to learn. But just imagine the possible outcomes.