To meet client demands for high-quality visuals, I learned that inpainting directly onto high-resolution Blender renders gives me the most control.
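As a rough sketch of that inpainting step, here is the shape of it using the Hugging Face diffusers library; the model ID, file names, and prompt are placeholders, and in practice I work on crops of the full-resolution render rather than the whole frame:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Example model ID; any diffusers-compatible inpainting checkpoint works.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("blender_render.png").convert("RGB")  # the Blender render
mask = Image.open("edit_mask.png").convert("L")           # white = repaint

# Only the masked region is regenerated, so the rest of the render
# (and the client-approved composition) is preserved pixel-for-pixel.
result = pipe(
    prompt="weathered brass fittings, soft studio lighting",
    image=render,
    mask_image=mask,
).images[0]
result.save("render_inpainted.png")
```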
I also learned to use masking in the latent space to introduce elements that the ML model did not recognise. This proved an effective strategy for rendering these ‘unlearnable’ elements into photographically enhanced CGI environments whilst preserving their unique design features.
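The mechanics of that latent blend can be sketched in plain PyTorch; the function name and tensors below are illustrative rather than my exact pipeline, and in practice the blend runs once per denoising step (for example via a step callback in diffusers):

```python
import torch

def blend_masked_latents(generated, original, mask, scheduler, timestep):
    """Keep the 'unlearnable' element where mask == 1; let the model
    repaint everywhere else.

    generated: latents produced by the current denoising step
    original:  VAE-encoded latents of the source render
    mask:      1-channel tensor, 1 = preserve, 0 = regenerate
    """
    # Noise the original latents to match the current noise level, so the
    # preserved and regenerated regions stay statistically consistent.
    noise = torch.randn_like(original)
    noised_original = scheduler.add_noise(original, noise, timestep)
    return mask * noised_original + (1 - mask) * generated
```

Because the model never has to "understand" the masked element, its geometry survives the generative pass untouched.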
While the surprising and creative outcomes of text-to-image models can be beneficial, I found pure prompt engineering restrictive, which led me towards image-to-image workflows that cater more effectively to my clients’ expectations.
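An image-to-image pass looks like the sketch below (again assuming diffusers; model ID, files, and prompt are placeholders). The `strength` parameter is the key dial: lower values stay closer to the source render, higher values hand more control to the model.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("blender_render.png").convert("RGB")
result = pipe(
    prompt="photorealistic product shot, natural daylight",
    image=render,
    strength=0.45,       # keep the composition, refine surfaces and lighting
    guidance_scale=7.5,
).images[0]
result.save("render_refined.png")
```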
The dynamic nature of cloud-based models, although sometimes beneficial, undermined the reproducibility of my client work: a hosted model could change or be retired between projects. This led me to rely on local, open-source software, ensuring a more dependable and consistent workflow.
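In code, that reproducibility habit amounts to pinning the model version and fixing the random seed, so a client render can be regenerated identically later. A sketch, with the revision and seed as placeholders:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    revision="main",     # pin a specific commit hash in real use
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("blender_render.png").convert("RGB")
generator = torch.Generator(device="cuda").manual_seed(1234)
result = pipe(
    prompt="photorealistic product shot, natural daylight",
    image=render,
    strength=0.45,
    generator=generator,  # fixed seed, so the output is repeatable
).images[0]
```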