
Using Generative AI to Perfect Image Cropping

Note: This article is part of my series on practical applications of Generative AI for photographers.
I dabble in bird photography and have even invested in some equipment to improve my results. Yet, capturing birds poses a challenge, given their swift movements. Added to this, even minor movements by the photographer (like breathing) can significantly affect the composition, especially when using a long lens. Consequently, there have been instances where my subjects ended up too close to the frame's edge or were inadvertently clipped.
Clipping a wing or a tail is not ideal, obviously, and if your feathered subject is too close to the edge of the frame, cropping or printing the image can prove difficult. Sometimes you just need a little more headroom, or space for the subject in which to "move."
Thanks to an improved Image Generation model (Firefly 3) in the Adobe Photoshop beta, combined with the new ability to enhance detail in generated content, those near misses could be a thing of the past, visually, if not practically.
Room to Move
By using Generative Expand in the Photoshop beta, I was able to give the swan more room to fly.
My skill at capturing birds in flight still needs a lot of refinement, so being able to adjust the aspect ratio of an image by generating a bit more background can really impact the overall composition.

If you look closely at the centre image, you might notice a slight color shift in that extended area: a bit of magenta. By using a linear gradient mask and applying a small amount of Tint (green) and Grain, I was able to counteract that minor issue.
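For anyone curious what that kind of correction looks like outside of Photoshop's sliders, here's a rough sketch of the idea in Python (NumPy and Pillow). The file names, the width of the extension, and the correction amounts are all placeholders; this only illustrates the concept of a linear gradient mask driving a green tint and a touch of grain, not what Photoshop actually does under the hood.

```python
# Conceptual sketch only - file names, strip width, and amounts are made up.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("swan_expanded.jpg")).astype(np.float32)
h, w, _ = img.shape
strip = 600  # assumed width of the generated extension on the right edge

# Linear gradient mask: 0 at the original edge, ramping to 1 at the new edge.
ramp = np.linspace(0.0, 1.0, strip, dtype=np.float32)
mask = np.zeros((h, w), dtype=np.float32)
mask[:, w - strip:] = ramp

# Counteract a magenta cast by nudging the green channel up where the mask is strong.
tint_strength = 6.0  # in 0-255 units; deliberately small
img[..., 1] += mask * tint_strength

# Add a little grain so the extension matches the noisier original.
rng = np.random.default_rng(0)
grain = rng.normal(0.0, 3.0, size=(h, w))  # sigma chosen by eye
img += (mask * grain)[..., None]

Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)).save("swan_fixed.jpg")
```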
Low Ceiling!
When you're hand-holding a camera and shooting at the equivalent of 1640mm, minor movements - even breathing - can significantly impact a composition.
Generative Expand in Photoshop gave me that little extra head - and tail - room I needed for this image of the sparrow. 
Extending a background in Firefly isn't perfect yet; the AI struggles to replicate - of all things - image noise! This could be due to the resolution limitation (see below). The middle image of the sparrow was the direct output from Firefly. If you look on the left side of the image, you will notice a subtle difference between the original edge of the image and the extended area. I had to bring the image into Lightroom and use a gradient mask to try to match the grain and overall brightness. The end result is acceptable but could be improved with a bit more work in Lightroom.
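If you want to see why the extension reads as "too clean," you can put a number on it. The sketch below (Python again) estimates the grain in a patch of the original frame and in a patch of the generated strip, then adds noise to the strip until the two roughly match. It works on a grayscale copy for simplicity, and the file name, strip width, and patch positions are assumptions; Lightroom's gradient mask is the practical fix, this just illustrates the mismatch.

```python
# Rough grain-matching sketch - all file names and dimensions are assumptions.
import numpy as np
from PIL import Image, ImageFilter

arr = np.asarray(Image.open("sparrow_expanded.jpg").convert("L")).astype(np.float32)

def noise_sigma(patch: np.ndarray) -> float:
    """Crude noise estimate: std of the residual after a small blur."""
    blurred = np.asarray(
        Image.fromarray(patch.astype(np.uint8)).filter(ImageFilter.GaussianBlur(2))
    ).astype(np.float32)
    return float((patch - blurred).std())

strip = 400                              # assumed width of the generated left edge
original = arr[:, strip:strip + 400]     # patch just inside the original frame
generated = arr[:, :strip]               # patch inside the extension

sigma_orig = noise_sigma(original)
sigma_gen = noise_sigma(generated)
print(f"original grain ~{sigma_orig:.1f}, generated ~{sigma_gen:.1f}")

# If the extension is cleaner than the original, add back the missing noise energy.
if sigma_gen < sigma_orig:
    extra = np.sqrt(sigma_orig**2 - sigma_gen**2)
    arr[:, :strip] += np.random.default_rng(0).normal(0.0, extra, generated.shape)
    Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8)).save("sparrow_matched.jpg")
```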

Human input and decision-making is still a critical component of working with Generative AI. 
While Firefly Model 3 is noticeably better, Firefly still generates content at 1K resolution (1024x1024 pixels), so when your selection (or background extension) extends beyond that resolution, the generated area can still drop in detail.
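A quick back-of-the-envelope calculation shows why. If the region you ask Firefly to fill is wider than its native output, the generated pixels have to be stretched to fit, and detail drops roughly in proportion. The fill width below is a hypothetical example:

```python
# Back-of-the-envelope check of the 1K ceiling (example numbers only).
GENERATION_SIZE = 1024   # Firefly's native output resolution per side
fill_width = 2400        # assumed width in pixels of the expanded region

upscale = fill_width / GENERATION_SIZE
print(f"Generated content is stretched ~{upscale:.1f}x to fill the selection")
# ~2.3x here, which is why one large expansion can look softer than several
# smaller ones that each stay near the native 1024 px.
```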

A welcome addition in the Photoshop beta is the ability to enhance the detail of a generated image area.

Enhance Detail improves the sharpness of variations generated by Generative Fill. With this improvement, variations can have greater detail clarity and blend more seamlessly with the existing image.

This could be used when extending a background or when adding or removing elements from a scene. Just mouse over the variation you prefer in the Generative Layer Properties and tap the Enhance Detail icon. If the icon is greyed out, it indicates that you have either already applied Enhance Detail or that the option is not needed.
In my limited tests so far, the end result is only slightly better when trying to improve a large area. If you opt for the older method of multiple, smaller selections, you will likely get better-quality results from Enhance Detail.

I've also found you get more detail when the image area has more detail to work with. It may sound rather obvious, but if the area you're extending is out of focus and has little detail (like my bird examples), there's not a lot of detail to truly refine. This is only my opinion/observation.
In the example above, there is a noticeable difference between the original (left) and the enhanced version (centre), particularly with edge detail, when viewed at 200%. For context, I've also included a capture that shows how much image area was added.
Notes: 
1) Enhance Detail does not impact your Generative Credits
2) Enhance Detail can only be used on a Generative Layer
Losing a wing tip or tail tip isn't the issue it used to be, although I will always strive to capture the subject in its entirety when possible.
Minor edits like this are the tip of the iceberg; the quality of Generative AI continues to improve, be that with Adobe Firefly, Midjourney or DALL-E. As I often tell people, "Today is the worst it will ever be." Yet another reason why it's important that - as photographers - we know what the technology can - and can't - do. It can't be ignored or hushed into non-existence.
Portrait to Landscape
More extreme examples are possible. You might recall my red canoe photo, where I extended all four edges of the original image - and even successfully printed it. In the example below, I went from portrait to landscape orientation.
In the example above, Firefly insisted on adding purple to the extended area. I did the best I could to reduce it (others would likely do a better job). This is the first time I've had such a noticeable color bias, but then again, I am also using a beta product. Converting the image to greyscale or applying a vintage color profile negates the issue.
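If you're wondering how much new background an orientation change like this actually demands, the canvas math is simple. The sketch below uses Pillow to build the wider canvas and centre the original frame; the empty margins on either side are what Generative Expand has to invent. The file names and the 3:2 target ratio are just examples, not the dimensions of my image.

```python
# Canvas-setup side of a portrait-to-landscape expand (illustrative only).
from PIL import Image

src = Image.open("portrait_original.jpg")   # e.g. 4000 x 6000 (2:3 portrait)
w, h = src.size

target_ratio = 3 / 2                        # assumed landscape 3:2 target
new_w = int(h * target_ratio)               # keep full height, widen the canvas

canvas = Image.new("RGB", (new_w, h), (128, 128, 128))  # neutral filler
canvas.paste(src, ((new_w - w) // 2, 0))    # centre the original frame
canvas.save("landscape_canvas.jpg")

print(f"Each side needs {(new_w - w) // 2} px of generated background")
```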
Content Credentials
One advantage of working with Firefly is that you can embed Content Credentials into any of your images that incorporate Generative AI. Content Credentials are like an ingredient list for your image, detailing the various edits that have been applied.

You have the option of enabling this feature in Photoshop through the Settings dialog and through the Content Credentials panel, so that when you export a JPEG file or save and reopen a PSD, the images are tagged with the Content Credentials logo, and tamper-proof metadata is added to the file. You can even upload that metadata to the cloud, so it is always accessible to your audience on the Content Credentials website.

Note: Content Credentials can also be enabled in Lightroom Desktop and are coming soon to Lightroom Classic.
Inspecting the image seen above takes me down a digital audit trail of the work done to the image (in general terms) and what - if any - AI technology was used.
Wrapping Up
I hope this has given you some ideas and inspiration on where Generative AI could be useful in your workflow. Again, the choice to use the technology or not is yours, but it's always a good idea to understand it.

Knowledge is power, as they say.

Until next time!