Season 01 • Spotlight on AI
Episode 010

Cinder’s Glen Wise on trust and safety threats and holding AI accountable

A conversation with the Co-Founder & CEO of Cinder

Generative AI has transformed technology, presenting new opportunities and raising concerns about potential harm. There is no shortage of questions about how to hold generative AI accountable – or if we can at all. Glen Wise and his team are answering these questions through Cinder’s trust and safety platform.

Because of the recent advancements in artificial intelligence, companies face new challenges in identifying how users, networks, or content may violate their terms of service. Glen and his team at Cinder are uniquely qualified to help. Before Cinder, Glen worked for the US government and, after that, in threat discovery at Meta. There he met his co-founder, Brian Fishman, a renowned expert in counterterrorism and hate speech. Together they built community threat intelligence capabilities to combat some of the biggest internet abuse campaigns of our time, involving hate groups, terrorist organizations, and election disinformation.

Cinder was born out of Glen’s efforts to improve threat intelligence at Meta. To understand the tools and best practices other companies were using, Glen spoke to teams in delivery, ride-sharing, gaming, and AI, only to come to a startling realization: there weren’t any. To meet that need, Cinder launched its trust and safety platform in 2021. Today it is used by rising AI companies and top tech companies alike to combat internet abuse at scale.

“AI is being used across almost every single threat vector today, and we’ve seen real examples of how AI can be used for harm. We need to take this seriously. Can we actually hold this thing accountable?” – Glen Wise, CEO and Co-Founder of Cinder

Glen balances the optimism of Silicon Valley with a unique perspective: AI alone can't replace humans in ensuring safety, and platforms will always need people for their most critical decisions. On this episode of Spotlight On: AI, Glen and Sara Ittelson discuss the security challenges AI has accelerated and the enduring responsibility humans have in countering them.

Conversation Highlights:

  • 00:00 - Intro and Glen’s background
  • 03:00 - The journey to Y Combinator and the founding of Cinder
  • 04:00 - Introduction to Trust & Safety and the impact generative AI has had on the space
  • 08:00 - How AI can combat abuse and where it could be used as an engine for harm
  • 14:00 - How investors like Sara discern product-market fit with new AI technologies
  • 17:00 - Glen’s predictions for the future of AI in T&S and the role humans will play
  • 25:00 - Advice for founders to build resilience early on so their platform isn’t used for harm

Host: Sara Ittelson, Partner at Accel

Featuring: Glen Wise, CEO and Co-Founder of Cinder

Read more about Accel’s relationship with Cinder:

Explore more episodes from the season:


Sara Ittelson (00:12):

I'm Sara Ittelson, your host, and I'm so happy to be here with Glen Wise. He is the CEO and Co-founder of Cinder. Let me first give our listeners some context on you, Glen, and what you've been doing at Cinder. Cinder is a trust and safety platform that helps organizations combat internet abuse at scale. You uniquely have an incredible background for this from your time at Meta, the US government, and Palantir. I'm sure you'll tell folks a lot about it, but you're uniquely qualified to be facing this, which is honestly one of the biggest challenges facing technology companies globally right now, only accelerated by AI. And so that's what we'll talk about today. You're also working with some of the biggest names in AI, so I'm really looking forward to today's discussion. So let's back up a bit and start with you: why did you found Cinder?

Glen’s background and journey to becoming a founder

Glen Wise (01:06):

Yeah, so my background: I worked for the US government, and after the government I worked at Meta as a security engineer. I worked on a team called Threat Discovery, and on that team we collected external intel, brought it in, and scaled the detection and enforcement of that intelligence. At the time, the term trust and safety wasn't really used at Meta, and it still really isn't; they call it integrity, which I think is interesting, because it's the integrity of the content on the platform, the integrity of the connections and the accounts. So our team supported integrity efforts at Facebook with threat intelligence, and we saw a lot of how they ran their internal operations: how they bring data together, how they build models, how they use human labelers and human reviewers to solve some of these really complex challenges. After my time at Facebook, I really wanted to know how other companies did this as well, because after three years it was kind of like, should I join another company?

(02:15): Should I become a trust and safety engineer somewhere else? Should I build up a trust and safety team somewhere? I spoke to all these other companies across delivery, ride sharing, social, gaming, marketplaces, AI, and I realized that there was absolutely nothing on the market for them, and all their internal tools were a decade behind where Facebook was. After all this, you get the itch, and you try to do everything in your power to not found a company, and then I just couldn't do it. So I applied to Y Combinator, got in, they really pushed me to go out and do this, and I formed a really great team. A year and a half later we've raised our Series A, we have some really amazing customers across AI, marketplaces, and social, and we're building something that really does help companies keep their users safe, which is really motivating for me and the entire team.

Sara Ittelson (03:15):

For folks that aren't super familiar with trust and safety, what are some examples of the types of harms and threats that you're helping companies combat and protect against?

Glen Wise (03:26):

I view trust and safety as: how can your platform be used for harm? That's really what it is, and a lot of companies have terms of service that they use to stop and combat this. So it's really about how users or networks or pieces of content are violating your terms of service. For a marketplace, that might be fraudulent listings; for ride sharing, that might be someone getting harassed or harassing their driver; for generative AI, it might be someone asking it how to build a bomb. All of these are examples of trust and safety issues that our customers use Cinder to help combat.

How Generative AI is impacting trust and safety

Sara Ittelson (04:07):

You kind of touched on it there at the end with generative AI, but you founded this before the recent explosion of AI into the cultural zeitgeist. How has it changed trust and safety and the work that you're doing at Cinder?

Glen Wise (04:21):

Generative AI will have a strong impact on trust and safety operations, and it already is. Every platform today has some sort of classifier, either one they're purchasing externally or one they're building in-house, and they have human labelers or human reviewers that review content and help train their models. The great thing we've seen with generative AI is the ability to actually do some of that human labeling using large language models, which is awesome. There's still so much work that can be done, but we're already seeing the effect large language models have on certain decisions. Now, the thing about trust and safety is that decisions can be incredibly complex. We view decisions as a spectrum: you can have very simple decisions, like "is this nudity, yes or no?", which is a pretty simple decision that classifiers have been able to identify, all the way up to working a dispute that involves three parties.

(05:32): It might involve a driver, a merchant, and someone getting a delivery, and how do you actually resolve that dispute? We see LLMs as a tool you can use to help the human solve that. But at the end of the day, let's say you're part of the sharing economy and you hire someone to walk your dog and something happens to the dog. You want a human to be on the other side resolving that, calling you, making sure everything's okay, and providing you support. So it's really kind of a balanced approach.

Sara Ittelson (06:09):

How are you helping companies navigate that? Where is the AI mature enough to be deployed more independently, with supervision, and where is it more nascent, where we don't have enough data, we don't have clarity on our policies, and we still need all that human intervention? How do you help folks navigate where the AI is ready to help versus what apparatus they need around the AI?

Glen Wise (06:35):

Yeah, there's definitely a lot of progress that's already been made with standard classifiers. Fraud is a really great example. You have Thorn and Safer for child exploitation material and CSAM, and those are use cases where AI has been and will continue to be really effective in helping identify. There are still use cases where it's going to be really challenging: how can we detect and how can we stop disinformation, for example, especially when it's coordinated by a nation-state? Will we ever be able to identify that via AI? Probably not. I think it's going to be a cat-and-mouse game, but there are some really great companies coming out that are building some of those detections, and we have companies building some of that detection in-house. Anything adversarial is going to continue to evolve: one side does something, the other side does something, and they'll go back and forth until who knows what the outcome is.

The ways AI is being used to cause harm

Sara Ittelson (07:40):

We've been talking a lot about AI as part of the solution set, but maybe the flip side of that is how are you seeing AI being used as sort of the engine for harm and more efficient harm on these platforms?

Glen Wise (07:55):

AI is being used across almost every single threat vector today. That's everything from generating CSAM to generating disinformation. There's a really great report put out by Microsoft recently showing how China is using generative AI in their information operations to create content that sows discord in US politics. And we're already seeing examples from our customers of just massive amounts of fraud and overall low-quality content thanks to it. There's also the question, and I'm really curious what you think, because when people talk about AI there's always this question, of how much of this is real versus how much of it is still kind of imaginary right now. But certainly for some of these threats, we've seen real examples of how AI can be used for harm, which we need to take pretty seriously.

Sara Ittelson (08:57):

Yeah, I mean, as an investor it's incredibly exciting to see what a small team can achieve with some of this new technology. But wearing my Cinder board member hat, I think about that same efficiency being employed by bad actors and get much more concerned, because you realize it used to be that you would only target the Facebooks of the world, really large platforms, with your disinformation and misinformation. But if it's so easy to generate this content and distribute this content, you can really go after that torso and long tail, and those companies don't have the resources that you had at Meta to combat these things. So I think it's that much more important that they have something like Cinder to give them some defense and offense against it.

Glen Wise (09:50):

Go to Google and just search some marketplace and then, in quotes, "as a large language model" or "as an AI language model." You'll see so many fake reviews where they say, "As an AI language model, I cannot actually provide a review of this item." People just write these scripts and mass-spam all of these marketplaces, not really caring whose platform it actually is, which is pretty interesting to see.

AI as a forcing function for improving trust and safety, and the process of developing confidence in new AI models

Sara Ittelson (10:17):

Everybody's awareness of what's possible with AI has brought trust to the forefront, as you start to question everything that you see and engage with. How has that changed what it's like for you as a founder operating in trust and safety, which was maybe a category that people were less familiar with? I feel like AI has really accelerated the degree to which companies have to care about it.

Glen Wise (10:45):

We're seeing enterprises and government agencies really concerned about whether we can actually hold this thing accountable, because what if it doesn't do what we want it to do? Especially for generative models or neural networks, at the end of the day there's randomization there, and you don't actually know what's going to come out of it. How do you build trust in a system like that, where in some respects it's almost like another person because of that level of randomness? Now, there's this whole issue of personifying gen AI models, and we could talk about that. But what do you think when you look at AI product companies that say, "we build this AI system for the enterprise," or enterprises that say, "we want to invest in AI"? What does that look like from an actual use case perspective, but also a trust and assurance perspective?

Sara Ittelson (11:45):

I mean, I think you touched on it: you want to get clarity on what the job to be done is and with what level of fidelity the AI can execute it within this context. And actually, I think your decision framework applies really well here, because you're thinking about the criticality of this decision and your tolerance for errors. Depending on the scenario, you can have a higher or lower error tolerance, and in the contexts you're dealing with, where there's genuinely user health and livelihood at stake, you don't have a high error tolerance. So whether in enterprise or in consumer, you need to understand what the use case is and therefore what your willingness is to trust the AI versus needing more oversight.

(12:42): I think the other interesting thing we've chatted about is that the most successful companies are continually iterating and building and launching new features. And when you have new products and new features, by definition you have a very limited dataset about how people use and engage with them. So those frontiers of technology and innovation are always going to be a place where there's sparser information with which to train models about how people engage. Over time, as things mature, we can probably trust it more, but at the cutting edge, I think you need a really tight loop with your team and high engagement with what the AI is doing.

Product market fit for emerging AI technologies, and how Cinder developed their own AI principles

Glen Wise (13:24):

How do you think about that with investing? How do you take a bet on something, if you're investing in AI that isn't growth stage? Where I view AI to be really effective is once you have the workflow in place: if you're Salesforce and you have all these people using CRMs, you can build AI on top of that and make it really valuable. I think Gong is doing this really well. As an investor, how do you assess product-market fit with newer AI technologies?

Sara Ittelson (13:57):

Especially in the earliest days, so much of it comes back to the fundamentals: is this a really acutely felt pain point? Is this a team that really understands it deeply? Is there a reason why, in this moment, there's an opportunity for behavior change and resolution of that problem? And are these the right people to bring this solution to bear? I think you're a good example: in the hands of the right founder who deeply understands their customer and their problem, you will regulate where, when, and how this technology is going to serve you well in achieving your ultimate goal of transforming a given space. But on that topic, why don't you talk about how you've come up with your own AI principles and what you're choosing to build in-house versus not? I think that's been a really critical decision and juncture for the company that people might benefit from hearing about.

Glen Wise (15:00):

I left Meta in December of 2021 and started Cinder right after that. If you look at the marketplace that was trust and safety at the time, you only had classifier companies: Spectrum Labs, Hive, and Sentropy, which had just sold to Discord. But you talk to companies, you talk to the head of trust and safety or even the CISO, and you ask: what really bothers you about doing this job? What really keeps you up at night? And no one told me it was AI. No one told me it was that they didn't have great detection; they had data, and things were coming in for them. It was: once something came in, what actually happened? And listening to this, you would hear: oh, it would come in via a CRM, our Zendesk or a Salesforce, and then we take that ID, bring it into an internal tool, or we query SQL, or we put it in a spreadsheet and do some VBA on it, and then we bring that into an admin tool, and then a different tool to do enforcement.

(16:08): And then we had to send a physical letter to law enforcement. There were so many manual processes at every company; even at Facebook we had a lot of manual processes. So every leader was really concerned about operational inefficiencies, more so than "how can I just detect everything and be done?" I think people who are not in trust and safety struggle with this: they look at trust and safety and say, oh, this is solvable, you just build a classifier and you can solve it. And then you're presented with a challenge like, okay, we're in ride sharing and someone just got held up at gunpoint. What classifier do we use now? Because I don't think there's really a classifier to solve that. The point being, these companies really struggled with workflow tools and their workflows.

(17:03): So we looked at that and we said, okay, let's solve that problem first. Let's build the system. Let's build the CRM equivalent. Imagine you go to a sales team and they're all building their CRMs in-house, or you go to a cyber team doing endpoint management and they're building their own firewalls. It would make no sense, but trust and safety teams were doing this, right? So we can build them a system that works for them, that is tailored for them, and once it's built, then we can build AI tools on it. At the time, we saw GPT-3 coming on the horizon; GPT was out and it was pretty cool at the time. There were rumblings of Facebook releasing something open source, rumblings of Google releasing something open source, and we're not going to compete with the billion-dollar budgets of Microsoft, Google, and Facebook. But we can compete on the operational side and the workflow side. So we were very intentional when we first started Cinder to focus our product and build toward that workflow solution. And now that that workflow solution is built, sure, we can build AI tools. We help our customers train their own AI models, we help them do data labeling, and we help them do annotation, which is an added benefit of building a really great workflow tool that genuinely helps solve a problem.

Workflow tools, AI integrations and the division of labor between humans and AI

Sara Ittelson (18:28):

To what degree is that operational platform driving efficiency for your customers? Sometimes when people think about this, as you talked about, they ask: can AI just solve this trust and safety problem? You want to just send it to a black box: tell me the answer. And we know that's an oversimplification. But you're seeing incredible acceleration of processing amongst your customers. So, order of magnitude, what does that look like?

Glen Wise (18:56):

Customers will always have humans in the loop for really complex or really important decisions. Even when we're at GPT-8 and you can do some really complex analysis and decision-making, for the decisions that really affect people, we're going to have humans, and I think people are actually going to be the premium in those decision-making processes. So with that, empowering those people to have the right data, the right context, the right workflows, automation, orchestration, and all the other fun, fancy enterprise words of 2015: you actually do need all of that in order to reduce the biggest cost driver, which will be people. And even if you could completely eliminate people, I don't think you should by any means. There shouldn't be a trust and safety team that says, "I'm going to completely eliminate every agent," right?

(19:51): Even the generative AI companies with the best models have large trust and safety teams that are still reviewing content. And the balance and challenge for these teams is how they form their own principles: how do they say, "yes, we trust this model and we can make decisions with it," versus "there has to be a person here"? Something trust and safety is facing, along with a lot of industries, is coming to a head with regulation and AI as well. You have the Digital Services Act coming out of the EU, for example, where certain decisions have to be made by people. You can argue that's really horrible and that every decision should be made by AI, but the DSA requires appeals to be reviewed by a human being, which means that if a model made a decision, a user can appeal that decision and a human actually has to look at it. Is that right or wrong? I'm not going to weigh in on that, but it's definitely fascinating to see how regulation will come into play with AI. What are the patterns you've seen with AI and regulation, and how are enterprises approaching this today?

Trust and safety blind spots throughout the genAI lifecycle

Sara Ittelson (21:04):

Some of this you can debate the merits of, but ultimately we have to operate within the frameworks as they exist. It's incumbent upon every company to know what regulations and compliance requirements apply to them and to be in line with those. You can use AI to drive efficiency and drive preparation, but we would certainly always advise all of our investments to be compliant with any regulation. So even leading AI companies have these giant trust and safety teams, and I think that might come as a surprise to folks. Throughout the lifecycle, you've got the data that goes into the model, you've got folks asking questions of the model, and you've got the output of the model. Can you talk about where, throughout the generative AI lifecycle, there are trust and safety vectors that maybe wouldn't be obvious to folks?

Glen Wise (22:06):

Yeah, they're everywhere. We're still in such early innings. I mean, generative AI has been useful for what, six months? When I say useful, I mean from a consumer standpoint. Consumers basically figured this out six months ago, which means attackers were probably looking at it a little bit earlier, and really smart people were looking at it well before that. But all of those gaps haven't been filled yet. We're on iOS, what, 17 now? And there are still zero-days in iOS. And we haven't even really begun releasing GPT and generative AI models and LLMs and transformers, whatever we want to call them, to the broader populace and broader ecosystem, where my grandmother, or someone's grandmother, would use it. That's not happening right now. But once it does, who knows what the threat vectors will be. Right now you have the ability to feed it training data that's just wrong.

(23:20): You have the ability to do prompt injection, or give it prompts that will confuse it, make it hallucinate, make it turn against what it was really trained for. And then obviously the outputs can just not be accurate, or just not be safe, fundamentally. So I think we're in such early innings that we're still figuring out all the harm that can be done. You have initiatives like, how are we going to do AI red teaming on this? And that's great, but also, how are we going to build a system so you don't even have to do red teaming, so that red teaming is actually red teaming instead of what it is today, where it's like a 14-year-old plugging something in and getting something obviously horrible out? So we have a long way to go to make sure these models are safe.

(24:12): And I think enterprises are going to start seeing some of the dangers, especially really large companies: insider threat issues, privacy issues, PII and certain customer data being exposed when they shouldn't be. Someone says, "I want data that I don't have access to, so I'm just going to ask this all-knowing model." We're going to have to figure that out, and I don't even think enterprises have fully gotten to the place where that is an issue yet. So I think we have some steps to go there, but once we get there, there will be a lot of different threat vectors that we don't even know of yet.

Glen’s advice to AI founders on dealing with bad actors and harm mitigation

Sara Ittelson (24:52):

What advice would you give to founders building today? They're focused on all the good ways their technology can be used, not all the bad ways. How can they get in the right mindset to build resilience early on and be aware of the harms their platform could be used for?

Glen Wise (25:10):

The first thing founders need to know is that there are really horrible people out there, and I think that gets lost sometimes in the go-get-'em attitude of Silicon Valley, where everyone's changing the world for the better. There are people out there who want us dead. There are people who want to hurt us and our children and want to do really horrible things. Whether we like it or not, that exists. We've talked to founders of AI companies who say no one's going to use this for harm, and then months later they'll reach back out and say, "this is being used for harm and we don't know what to do." The first thing is to know that someone's going to figure out a way to use it for harm. Even if you create a small blog, you'll get some really annoying spam comments at a minimum. So the first thing for founders is to understand that there are bad people.

(26:06): There are bad people, and they'll use it for harm; they'll figure it out. So it's honestly founders' job to be ahead of that, to know these are the ways it can be used, and these are the systems, tools, and rules we can put in place to make sure that doesn't happen. These companies are building really powerful technology, and they recognize that. The question is, how do we ensure it doesn't fall into the wrong hands or get used in illegal or nefarious ways, especially when we don't know what regulation around accountability is going to look like, which is a whole other conversation. But yeah, the first thing is to understand that it will be used for harm, that there are bad people out there, and then go from there.

Unexpected impacts and challenges of AI proliferation, and how Cinder thinks about addressing evolving customer needs


Sara Ittelson (26:52):

Cinder is not that old. And I think you founded Cinder coming from your time at Meta talking to all these other companies, but I think at that point, AI companies weren't on the main stage yet, and now they make up some of your core customer set. Did that surprise you? How has the proliferation of entrepreneurship in the AI space changed your customer set?

Glen Wise (27:22):

What has surprised me the most is the speed of development of these transformers, of these large language models. If you were to ask probably anyone two years ago, even Sam Altman I bet, whether these models would be used in a consumer manner, everyone would say no. So we built Cinder, again, to solve a real workflow challenge that we were seeing at the time, and a workflow challenge that we don't believe will really be disrupted by AI. I think we can use AI in our workflows to disrupt the way they're doing business today, but these problems are still going to exist. Five years ago, we were talking about the creator economy; now that's sometimes mentioned, but it's really all about generative AI. If anything, that's an offshoot of the creator economy, where now enterprises, let's again use Salesforce as an example, are building all these generative AI tools on top of their platforms, and that's content that's being created.

(28:27): When content is created, there will be harms involved; there will be issues and worries and challenges. So how do you stop that? How do you make sure that our system can be trusted, our system is safe, our system can be compliant with whatever regulation comes out? For us, that's great, because doing that work takes a lot of the same workflow that you were using to do standard content moderation or scaled review or anything else. So it's been awesome. I wish I could say, oh, we saw generative AI being this really big issue two years ago and we built this workflow tool for it. In reality, we were solving a real problem that was occurring, and it happened that more problems came out that were really similar to it. And we talk to customers all the time who are using Cinder to solve a specific workflow use case, let's say for trust and safety.


(29:23): And they'll come to us and say, oh, our fraud team feels the exact same way, and now they use Cinder too. We're like, sure, that works. For us, it's really about focusing on solving a genuine, real pain point, outside of everything else. If we're listening to our customers and we're solving a real pain point, nothing else really matters. And there's going to be more pain: as we learn more about our customers and get more deeply embedded with them, we're going to be exposed to more of their pain, and we can build more and take more ownership of those problems. Whether it's AI or some more boring workflow, we don't really care. We just want to build something that genuinely helps our customers. That's been our approach so far, and we'll see how it turns out. We're still writing the book here.

Assessing the range of threat vectors, and how Glen thinks about potential for harm

Sara Ittelson (30:17):

You must have seen so many different kinds of trust and safety issues while at Meta, right? You've got not just the core Facebook platform, but Marketplace, Instagram, WhatsApp. We've talked some about where AI can make executing a threat faster or easier; the nation-state disinformation campaign is a good example of that, something you directly dealt with while at Meta that now, with AI, is just easier to deploy. I'm curious, how much do you think the threats are in that camp, things we used to deal with that have just gotten bigger, faster, and more difficult to keep up with, versus entirely novel threats? What are the novel harms that AI might enable that we maybe don't have as much history or preparation identifying out in the wild, surfacing, and addressing?

Glen Wise (31:28):

The thing to always think about is: what is this actor's motivation? When you're examining a threat actor, what is their motivation? 99% of the time it's going to be financial; they're doing some sort of fraud, some sort of scheme to make more money. The other 1% is intelligence gathering and intelligence operations. Outside of that, sure, maybe someone's doing it for fun or they want to hurt someone, but chances are the motivation is going to be one of those things. From there, we saw new threat vectors emerge all the time. For every new product that's created, you have to think: how is this going to be used for harm? Facebook Dating shows up; how is this going to be used for harm? Or if you have a novel surface, look at Snapchat Stories, for example. Snapchat Stories came out, and typically the people using them were under the age of 18.

(32:28): How is that going to be used for harm? Now it's pretty obvious how it can be used for harm, but at the time the answer was, who knows? We'll just put it out there. So there's this move-fast-and-break-things mentality that can also result in harm. The way I would think about that question is: how does this novel product offer a new vector for someone to conduct some sort of harm, some sort of financially motivated attack? So when I think of novel threat vectors, I really just think about the vector itself: how is it coming through? With generative AI, that's a whole new way for things to come in that we haven't even seen yet; it's just another product that harm can come through. The people working the front lines of stopping this are used to novel threats and having to adapt and be relentless against these harms. AI is just another step in that, I think.

AI’s potential to reduce the human burden of trust and safety work

Sara Ittelson (33:27):

One of the things that really struck me about you and your team was the empathy for the people who do this work. To sit and look at really harmful content and to evaluate and assess it is psychologically and emotionally taxing in a lot of cases. One of the exciting things about AI is the potential to reduce some of that human cost. We've talked a lot about how humans will still need to be in the loop, but maybe you can talk for a moment about how AI can be applied to make it a more sustainable role for the folks who take on this really critical and challenging job of making the internet safer for all of us.

Glen Wise (34:15):

The frontline workers in trust and safety are treated horribly today, and they have been, because they're in a different country, they're looking at really horrible content, and that's always kind of been the pattern. In the past, the pattern has been: let's just send it to them and they'll deal with it, and we don't have to deal with the effects. Something we're really interested in is how we can actually build a platform for them. You have these amazing consumer experiences on the front side: every consumer app you have, every marketplace you use, you can get some sort of thingamajig delivered to you same day. That's awesome. Meanwhile, the people reviewing this content are forced to look at some high-resolution image of a really horrible thing or watch a really horrible video, and they don't have the tools to support that.

(35:11): There are even studies that show just reversing the image, making it black and white, and blurring it does so much in terms of PTSD and how much the content impacts you, let alone not showing it to them at all. What if you can describe the image? What if you can annotate a video, transcribe a video or transcribe sound, or describe an image? That can reduce the impact so much. Or, again, make pre-configured decisions on behalf of a reviewer, or only send something to a reviewer when it's a really large network or it gets really complex and a model can't handle it. There are going to be so many opportunities to improve the reviewer experience. Assisted review is a term we use a lot internally, because we want to assist the human reviewers doing this; we don't want to get rid of these human reviewers, who are really integral to trust and safety operations. It's about how we can provide them tools that make them work faster, make them more accurate, and improve their quality of life.

Sara Ittelson (36:20):

Glen, this has been super interesting and so informative. I think you come at this AI discussion from such a unique vantage point: employing AI within your product, leveraging the AI your customers are using, and having some of the leading AI companies around as major customers. You have such a unique perspective on this whole ecosystem, and I just really appreciate you taking the time to share some of these insights with our listeners. It's been incredibly fun.

Glen Wise (36:54):

Thanks for having me, Sara. This has been great.

Meet your host

Sara Ittelson
Partner

Focus: Cloud/SaaS, Consumer, Marketplaces, AI

Sara Ittelson is a Partner at Accel. She focuses on early-stage consumer and enterprise companies. Sara received her MBA from Stanford.
Read more on Accel.com
Season 1 of the Spotlight On podcast, by Accel