Using DALL-E AI To Generate Photo Backdrops

TOYPHOTOGRAPHS
5 min read · Mar 4, 2023

I have been very immersed in AI image creation (or AI-ART if you will) since it became hot last year. It’s a total time suck (and mind suck) because I can spend hours just daydreaming about certain scenes and then recreating them using the AI. It’s fun and enthralling.

I don’t use Midjourney, although everyone else seems to. I primarily use a home-brew AI generator that some friends built for me, but I am also proficient with (and regularly use) Jasper and DALL-E.

Jasper and DALL-E range from free to very affordable (depending on how much work you do with them) and both create interesting results. The caveat with both is that the output images are quite small, so I use Topaz Gigapixel to upscale them, and it works very well about 95% of the time. (bit.ly/topazgigapixel)
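To make the upscaling step concrete, here is a minimal sketch in Python using Pillow. To be clear, this is not what Topaz Gigapixel does internally (Gigapixel uses its own AI model and generally produces much better results); it is just a stand-in showing where the resize fits in the pipeline, with the 4x factor and file paths as illustrative assumptions.

```python
# A stand-in for the upscaling step. Topaz Gigapixel uses an AI model;
# this sketch uses plain Lanczos resampling from Pillow instead, just to
# show the small-image -> big-image step in the workflow.
from PIL import Image


def upscale(path_in: str, path_out: str, factor: int = 4) -> None:
    """Upscale an image by `factor` using Lanczos resampling."""
    img = Image.open(path_in)
    big = img.resize(
        (img.width * factor, img.height * factor),
        Image.LANCZOS,  # high-quality resampling filter
    )
    big.save(path_out)
```

A 1024x1024 DALL-E output run through this at 4x becomes 4096x4096, which is in the ballpark you need before printing a 20x30" backdrop.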

My primary use of these tools is to create photo backgrounds or backdrops that I either print 20x30" and mount on gator foam or that I use digitally for compositing in Photoshop.

I also frequently mash-up still photos that I’ve made or stock photos that I have licensed with the AI to create something even bigger or better.

In the case of this image, I was going for a look that was inspired by the new HBO series, (and video game) “The Last Of Us.” In the series, two characters traverse a world full of violent gangs and fungi-controlled zombies. The settings are often old, abandoned buildings, etc.

I want to use a setting like that and juxtapose a more modern character (such as someone from the Marvel or DC Universe) with the old, run-down scenery, and then make a photo of that mash-up. (That photo to come…) I know it’s convoluted, but I am a toy photographer, so I get to make it up as I go and as I see fit. That’s half the fun of it.

Anyway — I digress…

The background image you see here is the result of a session in DALL-E AI.

The way this works is you think something up in your head and then try to describe it to the AI.

This is called “prompt engineering.” Becoming good at this means you’ll get results that more closely match what you saw in your mind’s eye. I think this is a marketable skill that is one way existing photographers can use AI to their benefit. Instead of being worried about being replaced by the AI, embrace it and see how you can use it to add to your skillset and your bottom line.

Because I intend to do just that, I won’t list the exact prompts I used to make this image (it took me a long time to refine, and I prefer to keep that to myself). But I will share the process with you, along with one hint.

By the way, this is not that hard to do. If I can do it, anyone can. It is time-consuming, but if you practice at it, you will get better and better. I don’t want anyone to be afraid to try this. It just requires the ability to articulate, with specificity, what you want the AI to create.

Let’s start with the most basic of basics. You’re a photographer. You want to make a photograph (or the AI equivalent of one), so you ask for a photograph. Now here’s the hint: you can ask for a photograph of something, but you can also (and should also) specify things like the type of camera, film, lens, lighting conditions, etc.
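The hint above can be sketched as a tiny prompt-building helper. Everything here is illustrative: the function name, the subject, and the camera/film/lens values are my own assumptions for the example, not the actual prompts used for this image.

```python
# Treat the prompt like a camera bag: start with the subject, then bolt
# on camera, film, lens, and lighting details to steer the AI toward a
# photographic look. All values below are illustrative examples.
def build_photo_prompt(subject, camera=None, film=None,
                       lens=None, lighting=None):
    """Assemble a DALL-E-style prompt from a subject plus photo specifics."""
    parts = [f"A photograph of {subject}"]
    if camera:
        parts.append(f"shot on a {camera}")
    if film:
        parts.append(f"on {film} film")
    if lens:
        parts.append(f"with a {lens} lens")
    if lighting:
        parts.append(lighting)
    return ", ".join(parts)


print(build_photo_prompt(
    "an abandoned, overgrown theater lobby",
    camera="large-format view camera",
    film="black-and-white",
    lens="35mm",
    lighting="soft window light",
))
```

You would then paste the resulting string into the DALL-E prompt box (or send it through OpenAI’s image API) and iterate from there.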

In the case of this image, one of the prompt terms I used was “Daguerreotype,” because Daguerreotype images lend themselves to this old, abandoned-building look.

Learning how to prompt an AI is something I’ve spent months doing. Since this is just a short article and not a white paper, I don’t want to get too deep in the weeds here. I may write an e-book or something on this subject, but one thing stopping me is that this AI world is changing so quickly that anything I write might be outdated by the time it ships. So for now, I am content just exposing people to the WHAT’S POSSIBLE side of AI art, because in my opinion, that is where most education really happens. Seeing what’s possible has always been my own personal key to learning any new skill.

I have really benefitted from this new AI Art. For the stuff I do, I either mash-up the AI-generated image with an actual photo (as described above) or I go into the particle effects or render engine on Boris FX Optics and add SFX (as I did here) to transform the image. This is an important step. Here’s why.

When I take the aforementioned steps, I am creating what the U.S. Copyright Office (part of the Library of Congress) would consider a “transformative work.” A raw, purely AI-generated DALL-E image generally isn’t something I can claim copyright in on its own. But if I mash it up with my own photography and effects work (like I did here), the resulting transformative work contains my human authorship, and I can register a copyright for it.

If you’re an amateur who does no commercial work, then this part of my article doesn’t matter to you. Ignore it and go have fun. But if you’re a professional, you’ll want to do some research on this, as it’s an interesting, emerging area that is very pertinent to AI art creation.

CONCLUSION

Every time I share a story or a photo or an article that involves AI image creation, I get attacked by trolls. That tells me I am on to something. Fortunately, I get just as many (or more) messages thanking me for the information and telling me I inspired someone else to try, which is the payoff I am looking for. I really want photographers and other visual storytellers not to be afraid of or angry about AI. It’s just another tool, folks. Nothing to see here. It’s no more evil than a circular polarizer! Once you learn how to use it, you can apply it as you see fit to your own workflow and maybe make something amazing. I am rooting for you.

Remember, toys are joy.

For a list of my toy photo gear and props go to:
bit.ly/toyphotogear

Follow me on Instagram:
https://www.instagram.com/bourne.scott/


Written by TOYPHOTOGRAPHS

I'm a toy photographer. I'm also delving into AI Art. I also help people get the most out of their Fuji X100 series cameras. (C) 2023
