Legal
Our code of conduct for the use of text-to-image generation tools
At Sifted, we love to cover the ways technology is being used to create new and exciting things — and, where possible, we love to use it ourselves. And let's be real, text-to-image generation tools are next-level cool. They open up endless possibilities for us as a media and data platform.
These are not the only visual tools we use at Sifted — we commission graphic designers and artists and use visual assets made available to the media. Text-to-image generation tools are just one more way we can bring our stories to life and strengthen the visual dimension of Sifted’s brand.
We also understand that there are questions about these new tools. That’s why we have created this code of conduct and decided to make it public.
- We are committed to using text-to-image generation tools responsibly and transparently — in a way that respects the rights of artists and creators.
- We welcome feedback and criticism about our use of these tools.
Responsible and transparent use
Our first commitment is to using generative AI tools responsibly and transparently.
One of the ways we’re working towards this goal is by monitoring the models we use and crediting them, so our readers know which ones are behind an image. Not all models are fully transparent about how they’re trained, so we conduct thorough due diligence each time we adopt a new model to understand what we’re getting into. We don’t use any model trained specifically on, and meant to replicate, a particular artist’s work.
We take the inherent bias of models into account while working with them — for example, if we ask a model for a picture of a VC, the image will most likely be of a man. Knowing a model is biased lets us work around that and make sure we feature far more than just images of male VCs.
Additionally, we’re committed to conducting regular reviews and audits of our use of text-to-image generation tools to make sure it is fair and inclusive. We recognise that the laws and regulations surrounding these tools are constantly evolving, so we must stay up to date with the latest developments in the field.
How we use text-to-image generation tools
When we create an image for an article or any other Sifted project using generative AI, our workflow normally goes as follows:
- We sketch out the bone structure of the image we want and run a range of prompt explorations; in other words, we play around with different queries to ask the AI to deliver a specific concept. We never use the names of living artists in our prompts.
- We then do a bit of inpainting — aka filling in missing pieces — to add or remove a couple of elements.
- We then iterate quite a few times with image-to-image.
This creates a flow with probably 10-20 prompts in it at a minimum. That’s too many individual prompts to share with our readers, so for now we’re sharing some general guidelines instead.
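To make that a little more concrete, here is a minimal, hypothetical sketch of what a prompt-exploration, inpainting and image-to-image flow can look like using the open source diffusers library. The model IDs, prompts, mask coordinates and strength value are illustrative assumptions, not the exact tools or settings we use at Sifted.

```python
# Hypothetical sketch of a prompt-exploration -> inpainting -> image-to-image flow.
# Model IDs, prompts and parameters are illustrative only, not Sifted's actual setup.
import torch
from PIL import Image, ImageDraw
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionInpaintPipeline,
    StableDiffusionImg2ImgPipeline,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Prompt exploration: try several phrasings of the same concept and pick a base image.
txt2img = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(device)
prompts = [
    "a diverse group of investors reviewing a pitch deck, editorial illustration",
    "venture capital meeting around a table, flat vector style, warm colours",
]
drafts = [txt2img(p).images[0] for p in prompts]
base = drafts[0]  # in practice, a person picks the most promising draft

# 2) Inpainting: mask a region and ask the model to add or remove an element there.
mask = Image.new("L", base.size, 0)  # black = keep, white = repaint
ImageDraw.Draw(mask).rectangle([300, 320, 500, 480], fill=255)
inpaint = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting").to(device)
edited = inpaint(prompt="a laptop on the table", image=base, mask_image=mask).images[0]

# 3) Image-to-image: iterate on the whole composition a few more times.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to(device)
final = img2img(
    prompt="same scene, cleaner background, brand colours",
    image=edited,
    strength=0.6,  # lower values stay closer to the input image
).images[0]
final.save("article_hero.png")
```

In a real workflow, each of these steps is repeated and reviewed by a person rather than run once, which is where those 10-20 prompts come from.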
Learning to talk with the models in order to collaborate with them and get fantastic output takes time and effort. Our team is constantly training and learning so that we can use those skills responsibly and stay aware of bias in the models.
We welcome feedback and criticism
We want our readers and the broader community to call us out if they think we’re using a model we shouldn’t, and to give us tips or suggestions on which models to use. That’s why we have created a form for our community to give us feedback. You can just click here -> link to the form
We take all feedback seriously. If we’re made aware of any wrongdoing related to a specific model, we will investigate thoroughly and decide on a course of action.
TL;DR
We are committed to using text-to-image generation tools responsibly and we will continue to monitor and adapt our use of these tools to ensure that we are always doing the right thing. We welcome feedback and suggestions from our community and we are dedicated to being transparent about our use of these tools.