October 3, 2023

Unitary raises $15m for more nuanced nipple detection

The company is already working with large social media companies to try to improve moderation of harmful content online


Tim Smith

Unitary founders Sasha Haco and James Thewlis

Sasha Haco — CEO of London-based AI content moderation startup Unitary — spends longer than your average founder thinking about nipples. 

“The traditional way to do this is to ask, ‘Do we have nudity? Is this a nipple? Yes or no’,” she explains. 

“But the customer’s content policy might say, ‘If someone’s breastfeeding, that’s ok,’ or ‘artistic nipples are ok but nipples in a pornographic setting are not ok.’ The hard part is the nuance and the context. We talk about nipples a lot.”
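
Haco’s example suggests the shape such a policy layer might take. As a purely illustrative sketch in Python (the names and structure here are assumptions, not Unitary’s actual system or API), a raw detection only becomes a violation once it is checked against each customer’s own list of permitted contexts:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        nudity: bool   # raw detector output: "is this a nipple?"
        context: str   # e.g. "breastfeeding", "artistic", "pornographic"

    def violates_policy(detection: Detection, allowed_contexts: set[str]) -> bool:
        # Nudity alone isn't a violation; the customer's policy decides
        # which contexts are acceptable.
        if not detection.nudity:
            return False
        return detection.context not in allowed_contexts

    # One customer permits breastfeeding and artistic nudity; another might permit neither.
    lenient_policy = {"breastfeeding", "artistic"}
    print(violates_policy(Detection(True, "breastfeeding"), lenient_policy))  # False
    print(violates_policy(Detection(True, "pornographic"), lenient_policy))   # True

The genuinely hard part, as Haco says, is producing that context label reliably in the first place.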

Unitary has today raised $15m to scale its technology, which uses AI to identify harmful content online. The round was led by Creandum, with participation from Paladin Capital Group and Plural.

What does Unitary do?

The startup has built an AI model that analyses video and image content to detect violations of its customers’ safety policies. 

Content moderation has historically relied on manual labour: Meta alone employs around 15k people to view uploaded material and flag harmful content, a process that is expensive for the platforms and, at times, disturbing for those doing the moderating.

“There’ll always be some human element, but I think a lot of it will become automated. AI can replace a lot of that traumatic grunt work,” Haco tells Sifted.

Unitary was founded in 2019 and now processes “10m videos a day” for clients that include some of the “big social networks” (the company isn’t able to name names just yet).

Haco says that the startup’s clients span a broad range of online businesses beyond social media, including dating sites, marketplaces and other websites you wouldn’t immediately assume need content moderation services.

“Gaming is a really interesting use case for us, as are file storage companies,” she explains. “You wouldn’t believe it but people are using second-hand car websites for drug trafficking.”

What’s the market like?

Haco says that content moderation is a complex technical problem to solve: simple AI detection of things like nudity often doesn’t tell the whole story. Unitary’s 40-strong team, mostly engineers, has spent the last four years building an AI model that analyses the imagery, text and audio of online videos to assess whether content is harmful. 
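
To illustrate why that multimodal view matters, here is a minimal hypothetical sketch (the weights, names and fusion rule are assumptions for illustration, not Unitary’s method): each modality contributes a harm score, and a clip is flagged only when the combined evidence crosses a threshold.

    def assess_clip(image_score: float, text_score: float, audio_score: float,
                    threshold: float = 0.5) -> bool:
        # Late fusion of per-modality harm scores (each in [0, 1]).
        # A clip whose imagery alone looks borderline can still be flagged
        # when its text or audio adds harmful context.
        combined = 0.5 * image_score + 0.3 * text_score + 0.2 * audio_score
        return combined >= threshold

    # Borderline imagery, but clearly harmful audio tips the decision:
    print(assess_clip(image_score=0.4, text_score=0.5, audio_score=0.95))  # True

In practice such a fusion would be learned rather than hand-weighted, but the principle is the same: the decision draws on all three signals rather than a single detector.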

Others in this space include Silicon Valley-based companies Hive and Spectrum Labs (recently acquired by New York-based online safety company ActiveFence).

“A lot of competitors keep getting bought and acquired — I don’t know if that’s a good thing or bad thing,” Haco says. “We want to be a big, successful, impactful company. I’m not interested in an acquisition.”

She adds that she hopes to scale Unitary so it can become a “horizontal safety layer across the internet”, operating like a “Cloudflare for safety”.

To do this, Unitary will now deploy its fresh funding to beef up its commercial team. Haco, who previously worked as a theoretical physicist studying black holes in Stephen Hawking’s research team, says that the jump from academia to startups has been a rewarding, if hair-raising, one.

“My rate of learning feels so much higher in this role than it did in academia… It’s been the most exhilarating hell I’ve ever lived through.”

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn