Transcend’s Kate Parker on putting data back into the hands of users in an AI-driven world
Recent developments in artificial intelligence have sparked an outcry for control over personal data. While regulators, politicians, and the business community have been thinking about how to improve data privacy, there is still much more work to do. Kate Parker, Transcend’s COO, discusses the progress in data governance, how companies can adopt and build secure AI services, and why it is critical to give power back to the user.
Questions around data and privacy will only become more important and complex as AI evolves. Since 2017, Transcend’s privacy platform has been used by major enterprises like Robinhood and Patreon to answer questions about their data: What data do we have? Where is it going? Who has access to it? With user demand for control rising, it isn’t just about ticking a compliance box—it’s the key to a company’s survival in the future.
“Personal data within most companies goes off like a confetti gun. It gets into every SaaS system, every data warehouse. You have to pull the confetti back together and hand it back to the user.” – Kate Parker
Efforts in Europe, such as GDPR and the AI Act, along with California’s CCPA, represent tangible steps toward putting guardrails in place. On this episode of Spotlight On: AI, Kate Parker and Vas Natarajan discuss the latest in data governance and share how to enhance your company’s privacy posture to keep user data safe in our AI-driven world. Conversation highlights:
- 00:00 - Overview of today's data governance landscape
- 02:10 - The founding story behind Transcend’s privacy platform
- 04:57 - Exploring the link between customer privacy and brand value
- 06:36 - What privacy issues and threats arise from an unstructured approach to personal data in the AI landscape
- 10:30 - Common pain points enterprises face in meeting privacy standards and governing the output of LLMs
- 14:24 - What can be learned from Fortune 100 companies navigating the safety of personal data in AI vs. the proactive approach new startups are taking
- 24:20 - Current AI regulations and signals for future changes
- 32:56 - Transcend’s internal AI deployment across sales, operations, and content
- 36:30 - Anticipating the future of personal data usage
Learn more about Accel’s relationship with Transcend:
Explore more episodes in this season:
- Episode 01: AssemblyAI's Dylan Fox on building an AI company during a period of radical change
- Episode 02: Roblox’s Daniel Sturman on building great teams in the AI era
- Episode 03: Ada’s Mike Murchison on how AI is revolutionizing customer service
- Episode 04: Merge’s Shensi Ding on powering the next generation of AI SaaS companies
- Episode 05: Scale AI’s Alexandr Wang on the most powerful technological advancement of our time
- Episode 06: Bard’s Jack Krawczyk on the birth of Google’s AI chatbot and the creative potential that lies ahead
- Episode 07: Synthesia’s Victor Riparbelli on creating an environment to harness AI benefits and reduce harms
- Episode 08: Ironclad’s Cai GoGwilt on a decade of anticipating the transformative power of AI
- Episode 09: Checkr’s Daniel Yanisse on tackling bias in people and AI
- Episode 10: Cinder’s Glen Wise on trust and safety threats and holding AI accountable
- Episode 11: Transcend’s Kate Parker on putting data back into the hands of users in an AI-driven world
- Episode 12: Arm’s Rene Haas on building the brain of artificial intelligence
Vas Natarajan (00:13):
Welcome to Spotlight On. I'm Vas Natarajan, and I'm here with the amazing Kate Parker, who is the COO at Transcend.
Kate Parker (00:20):
Thank you for having me. I'm excited to be here.
Vas Natarajan (00:21):
Yeah, absolutely. This is going to be a lot of fun. Kate, a lot of what we're talking about this season is AI. AI is top of mind for all of our portfolio companies, and we'll talk a little bit about how AI intersects with Transcend. It's top of mind for the CIO universe. I mean, every CIO that we're talking to right now is incorporating some AI into their strategy. They're looking at vendors, they're trying to figure out how to mitigate risk. I know that's really top of mind for us at Transcend, and it's top of mind for regulators and politicians. I think AI has the ability to reorganize society in ways that maybe we can't even predict right now. And so from GDPR to CCPA to the AI Act now brewing in Europe, regulators and politicians are all thinking about how do we put some level of control around AI so that it's a productive technology for society. We've had some amazing guests this season, and we've been talking about AI through the vantage of how it can be put to good use, but at the same time, it's worth thinking about what those systems of control need to look like. And I know that's been really topical for us at Transcend.
Kate Parker (01:37):
I mean, I think you hit it on the head. Exactly. I think what's been super interesting for us with the companies that we serve is that many of the risks that exist with AI have already existed in the business world for many years. So things like processing personal data, having the right type of risk assessment. It's been interesting to watch companies go through this classic version of their risk assessments and then this additional component as it relates to the generative side, the inputs and outputs of the model. And putting those two things together has been super interesting for companies. So I'm excited to dig into it.
Vas Natarajan (02:10):
Absolutely. And I've already put the cart before the horse, so maybe we can back up and just talk about what Transcend is and how you guys got started. I think Transcend has a fascinating origin story, because it started from a personal pain point for our founders, Ben and Mike. Maybe, if you don't mind, walk us through that origin story.
Transcend’s Origin Story
Kate Parker (02:29):
Yeah, absolutely. I mean, it blew me away when I met Ben and Mike. So they are two computer scientists out of Harvard who went through a personal journey to try to figure out their own data. They were trying to hack sort of their efficiency, their time productivity, what music they listened to, how many classes they were going to, all of that stuff. They wanted all of that data.
Vas Natarajan (02:48):
Yes, I remember they were using Spotify and Whoop and all the different quantified self products back then. And it was so hard for them to get data out of those systems.
Kate Parker (02:58):
They couldn't get the data out. That's totally wild. And that was right at the time that Europe's GDPR came into effect, the data privacy regulation that said you have to give your consumers a way of taking back their data. And Ben and Mike read that and thought, wow, this is a really interesting moment for the business community to be thinking about the ways in which they have to hand back user data to the people who actually created that data in the first place. And so they basically called the field very early and said, there are going to be two ways that businesses approach this. There's going to be a very CYA approach that's focused on policies and banners and mitigating the risk ahead of them. And then there's going to be a deeper, longer-term approach having to do with actually regulating the data at the code layer.
(03:44):And so Ben and Mike, as two computer scientists said, well, that's the more interesting path, particularly if you look downfield, which they already were in terms of artificial intelligence and things like that of just how valuable our personal data is. And they decided that if they could make it really easy for companies to actually handle and do that infrastructure component at the data layer, then that would be the right output, not only for companies, but ultimately for consumers themselves. So a fascinating way into it, and it's coming full circle for them in a lot of big moments, which has been really cool to see.
Vas Natarajan (04:16):
Who were some of the first customers that started to use Transcend?
Kate Parker (04:18):
So Robinhood, Patreon as an example. So folks who have just incredible user bases, really engaged consumers and wanting to be able to have that level of brand trust, whether it's the creators and the instance of Patreon or the actual users who are doing financial transactions with Robinhood. And so a lot of our core and early customer success was with those incredibly popular consumer apps where there was a lot of daily interaction happening and they wanted to be sure that they could handle the data privacy components at the code layer in a really constructive, efficient and compliant way. And so that was the first runway stage for Ben and Mike as they were building.
Vas Natarajan (04:57):
Yeah, I'll always remember some of the early feedback from Robinhood, Patreon, some of those early consumer services companies. It was fascinating to watch them view privacy as a core tenet of their brand values. The product that they wanted to build and serve to customers needed to have privacy as a first pillar, as a first-class citizen. And they viewed that both as a benefit from a CAC standpoint, so, hey, it's just going to be easier for us to acquire users if people trust our brand, but also from a lifetime value standpoint: people will be more likely to engage with our service and contribute to our service. In the case of Robinhood, if they know that we are honoring their trade data, they'll be more willing to put money into their accounts and trade more frequently, and that's going to drive lifetime value. And so I think that's the other thing. You had some of these vanguard brands back in 2019, 2020 that were really writing the early book of what privacy was going to mean to building digital services broadly. And I think they all realized, hey, we'll build bigger and more valuable companies if we encode this at a product level.
Kate Parker (06:05):
I think that's exactly right. I mean, when we fast-forward to today, we now serve not only the consumer apps, we serve Fortune 100 companies. And what we're seeing in terms of value proposition is, number one, the strictest-compliance companies, particularly our largest enterprise customers, saying, we need to make sure that we have these dials turned as tightly as we can. You can only go so far when you're using policies and written statements. You've got to get in at the code layer and tighten those dials in terms of how you're handling compliance.
Vas Natarajan (06:36):
Kate, actually, unpack that for a second. So we've talked about how it's a big problem. We've talked about how we know it's going to be a brand value for folks. For those that wanted to implement privacy, or encode it at both a product level and a company level, how were they doing it in the early days?
Kate Parker (06:55):
Yeah, so in the early days it was a lot of web forms and shoulder tapping. So they would basically set up a system within their company to say, when a person comes and requests their data, I'm going to need you, John, you, Sally, you, Sue, whoever else, to handle all of these things. I'm going to need you to pull the data from your systems. If you're the marketing team, I need you to go in and scrounge it up and find it. We're going to put all of that data back together. We like saying that personal data within most companies goes off like a confetti gun. It just goes into every SaaS system, every data warehouse, and then you've got to pull all that confetti back together and hand it back to the user.
Vas Natarajan (07:34):
Yeah, absolutely. So I sign up at Robinhood; their email service provider has my data, their core production databases have my data, their re-engagement tools have my data. Vas is known across many different SaaS tools inside of Robinhood. And to govern that, as you're saying, that's the confetti gun that needs to be controlled in some way.
Kate Parker (07:57):
That's exactly right. And for any company, it's that confetti gun, just in terms of the operations, the way companies are run, the SaaS systems different organizations are using. And so being able to pull that all together traditionally was done through a lot of shoulder tapping. And now, with Transcend and its infrastructure components, you're able to do that at the code layer: mapping all your data systems, knowing where all your content is, being able to pull all of that back into a single unified view, and then actually being able to govern it. Our central focus has always been about making it easy for companies to execute these tasks, these data processing tasks. And as we look to things like artificial intelligence, a lot of our customers are pulling us in because many of the same things hold true at the governance level. We talk about looking out for artificial intelligence.
(08:48): And don't get me wrong, there's a lot that needs to be figured out on the regulatory field of where some of this will go. But AI is already regulated. We've got 13 comprehensive US state privacy laws that govern where you can put sensitive data. We've got Europe's GDPR, which already focuses on automated decision-making, having the right risk assessments set up, and processing the data components. So for many companies, it's just a matter of getting those foundational pieces ready to go, because AI already has quite a bit of regulation that needs to be covered off for businesses.
Vas Natarajan (09:26):
Yeah, absolutely. So I mean, this is what I love about Transcend: you have companies where the confetti gun has gone off, and they need to almost put the confetti back into the cannon in some ways. And so Transcend gives them that single choke point such that we know where the data is. And regardless of where your end customer is, if they're in California, if they're in Europe, if they're in Japan, if they're in Brazil, if they're in India, any of these jurisdictions where there are going to be unique, bespoke pieces of regulation: hey, I can orchestrate that data, I can govern that data based on where my user is. So Vas is logging in to Robinhood from India, and India has this point of view on how you should use this data. Okay, now I have Transcend, which gives me both that single choke point and a single on-ramp to apply policy to Vas’ data.
(10:18): And that's a lot of the power of it. And I think, to your point, the whole point of this season is to talk about AI, and in some ways how AI will be regulated. Maybe to replay what you were saying: it's almost going to be a natural evolution of how data has been regulated over the last five to 10 years. GDPR, CCPA, and all of these privacy regulation acts were just the on-ramp to how AI is ultimately going to be governed. And we're starting to see that take shape now. And I'm curious, what are we hearing from customers in this vein? What are the big pain points that they're bringing up that maybe are new and novel now, post-ChatGPT, versus what they saw the year before?
The AI Landscape, Then and Now
Kate Parker (11:04):
Yeah, absolutely. So many of our customers came to us very proactively this year and said, we have new plans for generative AI in particular. You are our trusted data governance infrastructure; we need your help actually managing the data flowing into the LLMs and flowing out of the LLMs. So as an example, chief information officers, a lot of folks, are launching productivity tools internally for their employee base so that they can access information more quickly, whether that's your customer service agent looking to quickly pull down a call script or take some type of action. But when you think about the LLM, you need to be very careful about what information is going into that LLM and then what information that internal employee can actually take out of it. So in many ways, for us, that's the core infrastructure problem that we've always been focused on: what is the data, where is it going, who has access to it, what should be done with it, and how should it flow throughout the organization? And so a lot of those conversations have been incredibly interesting, because people are deeply concerned, I think for the right reasons, about making sure that the right data is going in and the right data is coming out.
Vas Natarajan (12:16):
Look, I would say when I first put my photo into DALL-E, I was wondering, if someone comes in behind me and prompts DALL-E or Midjourney for "okay-looking Indian dude," I don't want my photo to come up. And so there's this concept of name and likeness governance. And I think, to your point, whether we know it or not, we're submitting a lot of personal data into these systems, and those systems are reading that data and creating probabilistic outputs that might approximate the data that we're submitting. And that's something that I think we should all be really paying attention to. And I know companies are starting to think about this, but in particular, end users are thinking about this. And that's always the groundswell that is really going to activate this category for us: do customers really care about this? And I think the answer is absolutely yes.
Kate Parker (13:08):
Oh, for sure. I mean, we serve arguably one of the largest AI companies in the world, and for them, they wanted to be sure that their users understood all of the general and existing regulation on the books in terms of deleting their data, accessing their data. And so being able to have that infrastructure run at scale, you can just imagine all of the end user attention on them and being able to handle that. And I think that, particularly for AI companies, is a very real thing. We talk to companies all the time that say, we have a great consumer-facing product, we plan on using AI, and we need to make sure that all of the regulatory screws, as it relates to users feeling ownership over their data and the ability to reclaim it, which they do have the rights to do, are set up, ready to go, kind of ready to rock. They know it will be a brand component for them. And so I think that's a very particular thing for AI companies: they're feeling that tip of the spear in the same way that a lot of consumer app companies felt it right after Cambridge Analytica and that more spotlight attention. But for AI, I think most of us, our family members included, feel that people are a little nervous, and they just want to know that those controls are in place for the regulations that are already on the books.
Vas Natarajan (14:24):
Yeah, absolutely. Well, for the audience who are building companies that are AI-native, trying to serve some of these enterprise use cases: you're talking a lot about the pipeline that's coming inbound to us, where they're already advocating for these issues. If you were to just step back and think about AI being implemented in an enterprise context, where do you think we are in the maturity curve? And what do you think needs to take hold for vendors in the space, maybe fellow Accel portfolio companies, for that market to become a true, active, engaged pipeline that we can sell against?
Kate Parker (15:00):
Yeah, absolutely. I think it's very specific to generative AI, so I'm going to put classic AI to the side. I think there are a lot of enterprise companies that have been doing classic AI for years. But on the generative front, from who we see and talk to in the Fortune 100, there's a ton of excitement to do something. They have product plans, whether it's chatbots that handle product recommendations and how they can speed that up, or reallocating their customer success team to make that more chatbot-based as opposed to having larger, distributed teams. So they have a lot of ideas, but they're incredibly nervous about the lack of brakes and guardrails, and that is causing them to take a beat to figure out how they're going to handle some of those thorny issues.
(15:49): And we sit in rooms with C-suite folks who are really trying to pull that apart. And so if you go down to the granular level, it's how do we make sure we don't put the wrong data into the model, and how do we make sure, particularly if it's an end user product, that the model doesn't spit out an output that we don't want it to. And that, to me, is really interesting, because it ranges across concerns. On the trust and safety side: how do we make sure that the output is brand-appropriate, is something that has the right ethics? It evolves to our brand morals and our brand values. But then there are obviously the privacy concerns, IP concerns, these very standard concerns: how do we make sure that our chatbot doesn't leak sensitive data to an end user? How do we make sure our chatbot doesn't put out information that, at its core, is just not something it should be doing?
AI and Privacy
Vas Natarajan (16:46):
I think what's so fascinating about our space is, when I think about trust and safety, privacy, governance, all these interchangeable terms, these were concepts that a lot of our companies were layering on after the fact. You take a Robinhood or GoFundMe or Patreon: it's like, hey, let's go build a big service, let's get a bunch of users, and then we'll figure some of this stuff out after the fact. But now, as they're thinking about AI, it has to be foundational to them. Exactly. It's almost like compute: we can't even start to build these services without making sure that we have our privacy posture in place. And just seeing that activate inside of our pipeline, hearing that qualitatively among our customers, that's super exciting. Privacy is the way people are going to build products in the future. It's not just a thing that you layer on after the fact.
Kate Parker (17:34):
I totally agree. I think that has been such a transformational moment for the industry. I mean, if we just look back three years ago, there were very few startups who were coming to us below the regulation thresholds and saying, I want to get ahead of privacy. That was a small cohort. Maybe they were doing something very sensitive with health data, or they had a very specific focus for their company, but by and large, folks were trying to figure it out after the fact, after they got big enough, after they started to need to be regulated. For AI companies, it has been completely different. We are meeting with folks daily who say, we are just getting going, and this is our mission on AI governance. We believe that we need to have this stuff handled, on two fronts. If they're consumer, they're not going to get the trust of consumers using their products.
(18:24): And if they are business focused, they're not going to be able to close those contracts because the businesses that they're looking to actually interact with and close contracts with also want to know where their data's going. They just don't want employees inputting stuff. And so it's got this really interesting business tension to it that I think adds another bit of richness on top of the user experience and on top of just the fundamental data rights that people have. It's also like businesses have a little bit more skin in the game right now, which has just been really interesting to see.
Advice for Founders in the AI Age
Vas Natarajan (18:56):
So we have a lot of listeners that are, as I said, AI-native founders, and they're trying to get traction with new AI-oriented products. What would you recommend to them as they're approaching enterprises to adopt new services? Obviously it sounds like, hey, you need to have some privacy, security, and governance posture out of the gate, but how should they find budget? How do they find the champions for their products? As a go-to-market leader inside of Transcend, are there any tactical nuances that you've noticed in this category that others might benefit from?
Kate Parker (19:32):
I think the biggest thing is being able to show exactly what the value is. There is a lot of hype in the AI industry. There's a lot of excitement about different use cases, and I think, even just drawing on our own journey, being able to demonstrate very clearly not only what you're saying but what you can actually do is incredibly important. People are trying to cut the wheat from the chaff very quickly. And so being able to pull that apart, I think, has been super interesting. I mean, it's interesting for me: you obviously talk to a lot of different startups and get a pretty big view of how folks are using AI. I'd be curious about your take on where you think risk is flowing into that and how they're actually thinking about that in terms of going after enterprise customers. Do you think they know that it exists yet, or do you think they're still in that learning curve?
Vas Natarajan (20:27):
I love that question, and it's something I think about a lot. We probably meet, I dunno, a couple thousand startups every year. The vast majority of them today have some AI-native component. And I think there's an interesting tension in how AI companies are pitching themselves right now. One is they're trying to sell efficiency. They're saying, hey, we can do so much more work per unit of input than you could have done before. A lot of what that implies is they're actually taking humans out of the loop. You might have a customer success team or a customer support team of 50, and hey, by being AI-enabled or implementing our product, maybe you can get the same output with only a team of 10 or 15. Products in this space historically have been priced on a per-seat basis. And so you have all of these companies that grew up in the SaaS era of wanting to sell more and more seats, but now you have a product that is arguably taking seats out of the equation. So you have a lot of companies that are just having to reimagine pricing and packaging, what that ROI story is, and how they connect to some value axis where you can get your foot in the door but then hopefully extract more economics from these customers over time.
Advice for Investors in the AI Age
Kate Parker (21:44):
It's fascinating to me just how we're seeing all of these different pieces turned on their heads in a bunch of different ways. So as you think about founders navigating that (I'm obviously not a founder myself), it's a curious thing: shedding some things that maybe the industry already takes for granted, adopting some new mental models. As you go forward, how are you thinking about giving founders the right tools to enter the AI space? Is there anything we can take away from that as well that you think's interesting?
Vas Natarajan (22:18):
I appreciate you turning this interview around. Touché.
Kate Parker (22:22):
Always got to be learning something.
Vas Natarajan (22:23):
I love it. I think the thing that we can do as investors is just not be prescriptive, because I think we're in a new world, and these technologies are implying entirely new business models. They're implying entirely new ways of company building. I think they're going to imply entirely new ways of hiring and firing too. We have a bunch of seed and Series A companies that have come to us and said, hey, we're using GitHub Copilot and we're getting 25% or 30% efficiency gains just through these copilot services. In some ways it implies, wow, do you really need to scale your engineering team from 10 to a hundred if you can get from 10 to 75 with the same amount of output? It's what we were talking about before, just the leverage that AI is giving to the labor force. So we're figuring it out as we go, but I think the thing that I've leaned on is just trying to be more curious than prescriptive.
(23:27): There are so many mental models of building and scaling SaaS companies that investors have relied upon for the past 10 to 20 years. Those models are going to be very much broken by what we're witnessing right now. And so we've got to go back to first principles and really think from the ground up: okay, how do we build and scale these companies? How do we go to market? How do we package and price? How do we hire? How are we going to fundraise behind these companies going forward? All of that, I think, is going to be turned on its head. And bringing it back to Transcend, the thing that I'm really preaching to the companies I work with is security, privacy, governance, trust and safety. These have to be first-class principles, a hundred percent. Not just from a marketing standpoint, but really in how we build and scale these products, because I think this concept of personal data security has jumped the ship.
Vas Natarajan (24:20):
And it's become a consumer and end user expectation that every company is going to have to build against. So Kate, let's get into the brass tacks around what privacy means in an AI era. Where are we from a regulatory standpoint, where do you think we're going, and what do you think that means for company builders?
AI Regulation and Governance
Kate Parker (24:40):
Yeah, I mean, it's fascinating. So where we are currently: AI is regulated. There are existing regulations on the books. In the US, we have 13 states with comprehensive privacy regulations. Obviously, Europe has been the high watermark for years with things like GDPR, and that obviously includes needing to have personal data processing protections, giving people the rights to their data, to delete it, to access it. GDPR is interesting because it has also included, for many years, automated decision-making, which is AI, and so being able to handle the risk assessments and looking at that. So that's just the regulation that already exists. You layer on top of that, in the US, things like the FTC and the FCC. They have already signaled quite clearly that they believe they have everything they need in order to regulate fraudulent practices and deceptive marketing, and to make sure that companies are really responsible for their AI outputs and aren't misleading consumers.
(25:40): That's already well within the governance structure of the FTC and the FCC, and folks, and when I say folks, I mean regulators, are already using the tools at their disposal. California, for example, has a privacy regulation act. They're already turning the screws on that to make sure that they have things like opting out of automated decision-making, which would be AI-related specifically. And they want to be sure that they're not only using what they have today on the books; they've also signaled that they want to go farther. And so most of us on the governance side are watching Europe. Very similar to GDPR, Europe is signaling that it will likely do some type of broad AI act. It is very likely that it will be a risk-based regulation, which means that they will likely categorize businesses into risk frameworks and then have a sliding scale.
(26:37): At the top of the scale, those businesses will be banned because they will be considered too detrimental for the risk. And then all the way down, there will be a sliding scale of regulation depending on the impact. So as we think about businesses today, it's not only getting the rank-and-file stuff ready to go for this new world, making sure you have your data privacy program ready to scale, ready to be efficient, not a back-office thing but really something where the screws are tightened. You also need to be thinking about where you fit into this risk framework. Are you ready to say where your company falls, in its usage of AI, on that scale of potential harm? Because those are the signals that we're getting of where the world is moving. So I think it's fundamental for companies right now to be thinking about this and getting ahead of it. I will be the first to admit that privacy regulation for many years was a complete paper tiger.
(27:31): If you were not a big tech company, you were just hiding underneath the standard of the industry and saying, we don't really have everything well put together, but the only people the regulators are going after are the folks with the huge big tech logos. Google, Facebook, those were the folks getting hammered with fines, hundreds of millions of dollars of fines. What we are seeing now, and to your earlier point, I do think AI has brought on another wave of this enthusiasm, and it's consumer-driven for sure. People are freaked out about how their data is going to be used. We are watching regulators get much more specific with folks. We had a healthcare system out of Chicago, as an example, that was fined $12 million for a pixel tracking issue where they were actually passing data to their ad tech network even after folks had opted out.
(28:23): Or so it was alleged; there was a settlement resulting in $12 million across the hospital system. So it's certainly becoming a far more aggressive regulatory environment. The fines are certainly increasing. The exposure across industries is increasing. It's no longer just a big tech problem. California in particular has signaled: we are going to make it clear that companies need to get this handled. And what's been really fascinating for us, we haven't really talked about our core products, but one of the things we do is on the tracking technology front: if you go to a company and you say, I do not want you to share or sell my data downstream to the Facebooks of the world, the Googles of the world, you actually expect that processing flow to happen.
(29:14): And for many companies, that's challenging if they don't have the right type of infrastructure product. For us, we actually have an auditor, a program that goes in and looks at websites to see if those pixels are actually firing. And so we already have a view of what a company would get fined for. And regulators know this too. This stuff is now discoverable. This stuff is now obvious in many capacities. And I think businesses are now starting to wake up to the notion that it's time to get this stuff handled. Many of those companies are realizing, okay, we're not going to be able to escape just because we're not Google. The regulators are saying: okay, we've signaled to Google and Facebook with these huge fines, now we're going to make it clear to the rest of the industry.
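[Editor's note: Transcend hasn't published the internals of the auditor Kate describes. As a purely illustrative sketch, with a made-up tracker-host list and a made-up request log, the core check could amount to flagging any request a page makes to known ad-tech pixel hosts after the visitor has opted out:]

```python
from urllib.parse import urlparse

# Hypothetical list of ad-tech pixel hosts to audit against (illustrative only;
# a real auditor would capture live network traffic with a headless browser).
TRACKER_HOSTS = {
    "www.facebook.com",
    "www.google-analytics.com",
    "analytics.tiktok.com",
}

def audit_requests(request_urls, user_opted_out):
    """Return the tracker requests that fired despite an opt-out.

    request_urls: URLs the page issued while loading.
    user_opted_out: whether the simulated visitor opted out of sale/share.
    """
    if not user_opted_out:
        return []  # nothing to flag; tracking may still be permitted
    return [u for u in request_urls if urlparse(u).hostname in TRACKER_HOSTS]

# Example: one pixel still fires after opt-out, a potential violation.
captured = [
    "https://www.facebook.com/tr?id=123&ev=PageView",
    "https://cdn.example.com/app.js",
]
print(audit_requests(captured, user_opted_out=True))
# -> ['https://www.facebook.com/tr?id=123&ev=PageView']
```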
(30:03): And our belief is they're just going to keep tightening. They have made very clear the spirit of what they want, which is individuals in control of their data and data being handled appropriately within companies. And from every indication we're seeing in regulated markets, they just keep updating the regulations to get closer and closer to that end state. One use case that we haven't talked enough about, but which is so fun for us, is when the governance team, whether that's the lawyer or the head of privacy, writes a PRD and goes to the engineering team and says, I think I need you to build something like this, and those engineers say, I'm pretty sure somebody has figured out a way to have this level of infrastructure, and then they call us. Those are our favorite opportunities and customers, because ultimately what we know to be true is that if a legal person is writing up a request for the way the product works, they are going to come back in six months with another request, because these laws keep coming.
(31:08): They keep changing. Don't think it's one and done, where you just set up your script and you're handled. The number of times we talk to folks who say, we tried to build this a year ago because we thought it was a discrete request and the legal team would then leave us alone, and then they realize this is an ongoing tightening of the screws of data governance, forever. We live in such an increasingly complex data world; this is not slowing down. We believe that in the next few years, a third of the global population will have data rights. This is coming. And so for those developers, the folks who raise their hand and kind of get Transcend, it's really magical. They're just like, oh, this is all the product I was being asked to build anyway, whether it's the deletion scripts or the firewalls or making sure the data is appropriately classified so they know where the sensitive data is. Engineers would much rather be building core things than having to worry about data governance.
Vas Natarajan (32:10):
I mean, it's the future-proofing that we provide customers that is a really cool element of this. Imagine all the world's regulatory frameworks condensed into one piece of computational logic, powered by Transcend, delivered by an API, so that regardless of where your end customers are and what laws govern their data, you are absolutely up to date, because you can call our API and we've already updated that logic. You know how to govern that data in that moment. And oh, by the way, if anything changes, we've got your back. Exactly. And to your point, no one wants to spend their time thinking about that. You just want to build.
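[Editor's note: Transcend hasn't published the API Vas is gesturing at. As a hypothetical sketch of the idea, the application asks one lookup "which rules govern this user's data?" instead of hard-coding every jurisdiction, the shape might be:]

```python
# Illustrative only: this rules table stands in for a remote vendor API that
# is kept current as regulations change. None of these names, regions, or
# deadlines come from Transcend's actual product.
REGIMES = {
    "DE": {"framework": "GDPR", "deletion_deadline_days": 30},
    "CA": {"framework": "CCPA/CPRA", "deletion_deadline_days": 45},
}

DEFAULT_REGIME = {"framework": "none", "deletion_deadline_days": None}

def governing_rules(user_region):
    """Return the privacy regime applying to a user's region."""
    return REGIMES.get(user_region, DEFAULT_REGIME)

# The application queries at request time instead of encoding each
# jurisdiction's logic itself; when the rules change, only the table does.
print(governing_rules("DE")["framework"])  # -> GDPR
```

The design point is the indirection: client code handles a deletion request by asking for the current deadline, so regulatory updates never require an application release.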
Kate Parker (32:52):
We do, but yeah, other people don't. Yeah, we nerd out on that, but yeah.
How Transcend is Integrating AI
Vas Natarajan (32:56):
Okay. Let's pivot a bit and talk about how AI is being used inside of Transcend. We were talking earlier about things like GitHub Copilot and the leverage that affords engineers, who are able to do so much more with so much less. How are we using AI at Transcend? I've heard of multiple use cases around, I dunno, generating marketing copy or sales collateral, and I'm sure some of our engineers are using things like Copilot. Are you seeing the impact in our operations?
Kate Parker (33:30):
Absolutely. We are adopters. In terms of culture, I talk to other tech companies and right now they're either in or out. We are in, for sure. Culturally, we believe that this is going to have a huge impact on the way we operate. We believe there are efficiencies to be had. We believe we can help create a better culture of getting things done. We hunt for those monotonous tasks that nobody wants to do, and we figure out how AI can be applied to them. So it's all over the map. Every organization leader has been asked to think about how they incorporate AI into their work processes. We've also done a lot to enable the entire business. As an example, we built a proprietary privacy GPT, which basically summarizes all of the privacy regulations that exist in the world.
(34:22): These things are incredibly complex. I do not have a law degree, so I really need help thinking this through. Not to suggest that it can provide the full answer, such that you would take it as gospel, but it certainly provides a leg up in being able to pick apart the different regulations, how and where they overlap. And arming every single person at the company with this tool means that our content strategist and our outbound business development team have the same access to information as our head of privacy as it relates to the regulations and understanding the field. That's been a game changer for us in terms of education. Things like Copilot are also part of our approach and our perspective. But yeah, we've been doing a lot, in part because we're also building a lot of the governance tools, so it's been really fun to do. When we talk about middleware, as an example, we have the middleware set up on our privacy GPT now, which means we see everything that goes into privacy GPT and everything that comes back out. It's been fun to have those tools working together within the company. We're big believers.
Vas Natarajan (35:30):
Yeah, that's really interesting. I think one of the best applications of LLMs or gen AI has been simple information recall. Why are people likening companies like OpenAI and Anthropic to Google? It's because so many of the use cases right now are just about search. It's information recall: how do we get the right piece of information at my fingertips in a very consumable way? In the case of privacy GPT, I'm not digging through all of this legalese and these 200 pages of regulatory frameworks published out on random websites. I'm just getting the right piece of information at the right time, something I can almost immediately implement in my day-to-day business. And to your point, I can imagine all of our sales reps and our SDRs and our content strategists getting so much leverage from that.
Forecasting the Future of AI Innovation
(36:27): They're eliminating all the cruft of having to wade through the density of data, and they're actually just getting to the answer. And sometimes they're even getting the content structured in a way that they can rinse and repeat, almost injecting it right into a piece of collateral for us. So okay, with all this in mind, we've talked a lot about privacy as it relates to AI and the need for safeguards so that end users can trust all of the great AI innovation that's due to come. What are you most excited by in the next year, three years, five years? What gets you really excited about how AI will impact humanity, or company building, or even you personally?
Kate Parker (37:11):
I think the thing that I'm most excited about is the impact that AI is going to have on the importance of personal data and the way that businesses are using personal data to fuel their operations, to serve their customers, and to make sure they're providing a very valuable experience out in the world. When I go back to the mission of Transcend, it's all about making it easy for businesses to do that: to manage their personal data, to provide end users with great experiences, and to meet the spirit and the letter of the regulations that exist. And when we look to the future, we see a lot more of this coming. So my big takeaway here is that this is just the beginning of the crest of the wave, in terms of the impact of data, the importance of data, and the underpinnings of the way this is going to continue to be regulated. We're thrilled and excited to watch companies take that turn and realize that they want to get their stuff right, because they recognize that there's only so much value in going fast.
(38:24): If they don't have the right brakes, it's totally useless. And so that's what we're most excited about.
Vas Natarajan (38:30):
Yeah. No, I love that. When I think about you guys and the position that you're in, we get a chance to go to customers and say: put the brakes in place now. Make this a foundational part of your application stack, and then run wild.
Vas Natarajan (38:47):
You are going to get to implement so many different cool things, knowing in the background that you already have your safeguards in place. To layer that on after the fact, I think, is in some ways going to stifle innovation.
Kate Parker (38:58):
Oh, for sure. And I think it's going to hurt your competitiveness in the market. We are moving towards an environment where, if you don't know where your clean and consented personal data is, that's going to hurt your ability to compete against other companies that have that in place. We are moving towards that world. Data is just getting more and more important.
Vas Natarajan (39:21):
Totally agree. Kate, thank you so much. We've been talking all season about the amazing force multiplier that AI is, but until today we hadn't really talked about what it means to implement AI in a safe and well-governed manner. You've shed such great light on what that means from a brass-tacks standpoint and, importantly, on how Transcend is going to power that future. So excited for you guys and the big opportunity ahead. We're already seeing it in the business; this past quarter, and I think the next couple of quarters, are going to be some of the most exciting in our company's history. So thank you for all that you're doing, and we're looking forward to the road ahead.
Kate Parker (40:01):
Thank you. We appreciate it.