Episode 13

April 02, 2024

00:36:01

Jennifer Daniell Belissent, PhD - #TrueDataOps Podcast Ep.31 (S2, Ep13)

Hosted by

Kent Graziano
#TrueDataOps

Show Notes

In this episode of the #TrueDataOps Podcast, host Kent Graziano, also known as The Data Warrior, is joined by Dr. Jennifer Daniell Belissent, principal data strategist at Snowflake. The episode traces Jennifer's journey from econometrics to her role shaping data strategy at Snowflake, and delves into the critical importance of a solid data foundation for successful AI initiatives, captured in the motto "no AI strategy without a data strategy." The conversation explores the concept of data products, their significance, and the shift in mindset organizations need in order to create and use these assets effectively. It also touches on the crucial role of governance in fostering collaboration and innovation in the data space, enabling organizations to unlock the true value of their data securely. Finally, Jennifer shares insights into the rapid evolution of AI, the democratization of AI technologies, and the importance of data diversity and responsible AI practices, providing a comprehensive overview of the current state and future of data and AI.


Episode Transcript

[00:00:04] Speaker A: Welcome to this episode of our show, #TrueDataOps. I'm your host, Kent Graziano, the Data Warrior. In each episode, we'll bring you a podcast covering all things related to DataOps with the people that are making DataOps what it is today. If you've not done so already, be sure to look up and subscribe to the DataOps.live YouTube channel, where you'll find all the recordings from past episodes. If you missed any of our prior episodes, now is your chance to catch up. Better yet, go to truedataops.org and subscribe to this podcast now. My guest today is my friend Dr. Jennifer Belissent, the principal data strategist at Snowflake. Welcome to the show, Jennifer. [00:00:46] Speaker B: Thanks, Kent. Super excited to be here with you today. [00:00:50] Speaker A: So for those who don't know you, can you give us a little bit of your background in data management and your journey at Snowflake? [00:00:58] Speaker B: Wow. I don't even know where to start, but my background in data really started a long time ago. So I won't give you the whole history, but I was an econometrician by training and spent some of the early years of my career running regressions alone in a hotel room in Moscow doing housing policy analysis, running simulations of alternative housing programs and seeing what the impact would be on families receiving new subsidies, as well as on the government. So that's where my journey with data started, but from a career perspective, totally nonlinear. I've done a number of different things and ended up at Snowflake three years ago after having been an analyst at Forrester Research for twelve years. And during my time at Forrester, I started looking at the data and analytics space, spent a lot of time around new roles, roles like the CDO, and how to build out data organizations, really.
You know me, I'm not necessarily a technologist, but I've spent a lot of time looking at the other two legs of the stool, if you will, the people and process side of things, and how to really build out an effective data and analytics strategy by having that more comprehensive vision. [00:02:21] Speaker A: Yeah. So what's your PhD actually in? I never asked before. [00:02:26] Speaker B: That's a good question. It's actually officially in political science, but it has a concentration on organizational theory. And my dissertation was on fiscal federalism, looking at Russia, and it was about intergovernmental budgetary relationships. But a lot of it is really about, again, organizational theory and how you motivate people and how you create the structures that drive the outcomes that you're looking for. So as odd and esoteric as that might seem, taking some of the things that I studied and applying them to this new world of data and analytics kind of makes sense. [00:03:11] Speaker A: Yeah. That explains, I'll say, the focus you've had on things like data culture and data literacy, that angle that you've always had in your conversations and your blog posts and things like that. And you're from the US, but you live in the Alps, is that right? [00:03:34] Speaker B: I do. And actually, the picture that you see behind me is not far from my house. I live in the Chamonix Valley and have been here for 14 years. My family, my husband and I and our two kids, as I say, we escaped the Bay Area, left Silicon Valley for another valley that's probably a little prettier. No offense to anybody still there. But, yeah, we made that move 14 years ago for what was going to be a year, and it has extended. [00:04:04] Speaker A: Wow. Yeah, 14 is a little bit longer than one. [00:04:09] Speaker B: We sold it to my kids as, you know, kind of an adventure.
We're going to go spend a year in France. And then, you know, when we sat down and took a vote as to whether we should stay, which we did repeatedly for a few years, it was always, yes, yes, we're going to stay. So we're still here. [00:04:27] Speaker A: That is an adventure. And I assume your kids are getting to learn a foreign language as well. [00:04:32] Speaker B: Well, they learned a foreign language, which they actually already knew, but most importantly, they learned a love of the mountains, and they're excellent skiers. [00:04:41] Speaker A: Yeah. Having spent most of my adult life in Colorado before I moved to Texas, I completely understand that. Loved living in the mountains, and I grew up skiing in the northeast as well, so, yeah, it's a good skill to have just for fun. [00:04:59] Speaker B: Exactly. [00:05:01] Speaker A: So a lot's happening out there in the data space these days. And as strange as it was to me, data was even discussed at the World Economic Forum in Davos, which I saw you attended. So, as a data strategist, what are you seeing out there? I know you were at the Davos summit and obviously heard quite a few things there, so give us a little background. What's hot out there and what are people talking about? [00:05:32] Speaker B: Absolutely. Well, I mean, you would have to be living in a cave if you haven't noticed that AI is now everywhere, and this explosion of GenAI onto the scene over the last couple years has really led to an acceleration of all AI initiatives. It's like people are now really interested. Board members are asking about it. And I hate to use the term, but I'm going to because people do: it's really democratized, at least, the interest in AI. And as we all know, and as the former CEO of Snowflake has always said, there's no AI strategy without a data strategy.
I was just also at Mobile World Congress, and as someone said there, telcos are always looking for ways to monetize some of the investments that they've made in their network. And so they're looking for growth opportunities. And just like at Davos, AI was everywhere at Mobile World Congress, and people are saying, if AI is the fuel for growth, data is the fuel for AI. So what I'm really excited about as a data practitioner is that there's this renaissance, this renewed interest. And it's not that it had ever gone away, but now everybody is excited about AI, and there's the understanding that you've got to have a robust data foundation to be able to do AI well. So that's what I'm excited about, seeing that acceleration. And I'm always out asking our customers, repeating that mantra of no AI strategy without a data strategy, but asking them pointedly, does your data strategy match your AI ambition? And that's what it's really about. [00:07:26] Speaker A: Yeah, that's a good question. I haven't heard it put that way. But that's a really good point, because you could have all these visions of what you want to do with AI. But if your underlying foundation based on your data strategy isn't getting you into a position to actually support the AI strategy, right, that's got to be there. [00:07:50] Speaker B: Exactly, Kent. I mean, if you can't find your data, if you can't access it, you know, if it's crap, we all know the term garbage in, garbage out. You're just not going to get the insights that you need. And one of the anecdotes or metaphors I often tell is I show this slide with the scene, and you'll remember it. It's got Maverick and Goose sitting there in the classroom in Top Gun, and they whisper to each other. And Charlie, the instructor, says, you know, excuse me, Lieutenant, is there a problem?
And he says, you know, the data on the MiG is inaccurate. So I start presentations with that and say, imagine if the data that you have on your fiercest competitor is wrong. You might make a decision that's going to put you at a disadvantage. You might take a risk that you don't need to take. You might invest in something that's completely the wrong direction. So I use that Top Gun reference to say, you really don't want to be in that situation where the data is inaccurate. [00:08:57] Speaker A: Yeah, yeah, that's definitely a problem. And I don't think people think about that. They somehow think that, just because of things like ChatGPT, it looks so easy from the outside, that they're going to throw some sort of gen AI on it, and whatever it is they've got in their data lake or their data swamp will suddenly become valuable and propel them off to the next level of their organization, without ever really thinking about, well, how good is that data and what impact is that going to have? I've used the garbage in, garbage out statement myself a lot in the last six months, talking to people about this big thing that people have to really get their heads wrapped around: that it doesn't happen magically, any more than data warehousing or anything else happened magically in the past. It never did. Everybody hoped there'd be some tool that you could just throw at it and push a couple buttons and it would be great. No, that's not the way it works. [00:10:03] Speaker B: Yeah. I was just talking to the CDO of Siemens Energy, Micheline Casey, who's a longstanding CDO. She's been in a number of different roles, and I love the way she put it. She said, you know, it seems like our board members think that GenAI is like the Magic 8 Ball. You shake it and you get the answer. And she was talking about how you have to set expectations within the organization.
You get the answer and, you know, she was talking about how you have to set expectations within the organization. [00:10:27] Speaker A: Yeah. [00:10:28] Speaker B: And I think that that's really telling, you know, that it's. It's about. It's about slowing down. You know, this is a. And, you know, we see this frenzy, we see this hype, but a lot of what I'm hearing from data leaders about putting that foundation in place is that they need to, you know, they need to enable that kind of excitement and maybe experimentation, but at the same time, slow down so that they can accelerate into the turn, out of the turn. You know, you slow down into the turn. You know, for those drivers out there, you slow down into the turn so that you can line yourself up and accelerate out of the turn. And so that's what we're seeing people do, you know, slow down, put the foundations in place, put some of these practices in place. You know, get your, your, your data ops, your ML ops in place so that you can accelerate out of the turn and you've got those processes, you've got those best practices. You know, you've recognized and shared best practices with others in the industry and you can accelerate. [00:11:28] Speaker A: Yeah, yeah. Just putting the pedal to the metal right out of the gate doesn't necessarily get you where you want to go without crashing. [00:11:35] Speaker B: You're going to hit the wall. Yes. [00:11:39] Speaker A: Yeah, I like that. That's another good analogy. So data products. Yeah, that's one of our hot industry buzzwords. So what's your take on that? [00:11:51] Speaker B: I love data products, if you can really say that, because when you think about, we've talked about data as the new oil data is gold data, is this data? I hate those. But what we're really ultimately talking about is the need to ensure that data can be used. So, you know, data is a resource. 
It's actually a renewable resource, because more people can use it, and that's what you want to get to. You want to get to the point where you've got data gravity, where people are excited to use it, and new applications and services and things are going to come to it. And it's by creating data products that you can really best do that, you know, quality data products. We can get into the data quality and we can get into all of the requirements that underpin those data products. But really, if you look at the definition of product, it's something that will be used. You're going to create something for use. And so it's obviously a complicated shift in thinking within organizations, because ultimately you need to find out who's going to be that product owner. And what does that mean? And how do you listen to your customers that are downstream and capture their requirements? You might have multiple customers. So how do you coordinate different requirements and identify maybe the common requirements, kind of the foundational piece you want to get to? Maybe you can create a data product that meets, whatever it is, 80% of the requirements of your potential customers, so that the incremental work they need to do to complete them is less than if they were rebuilding a data product on their own. So the idea, and the way I think about it, is that if you move back upstream so that you get to the source, each data source owner needs to be thinking about building blocks: creating data from their sources as building blocks, components that can be aggregated into other products downstream.
So just like we think of the car industry or other industries, we're building components that can be assembled into a customer 360 or a product 360 or a supply chain transparency or supply chain optimization tool. Whatever the ultimate end data product or native data application might be, it should ideally be built on a set of data products that are assembled into that final application or decision support tool or dashboard, whatever you want to call it. [00:14:37] Speaker A: Yeah, it's a lot like what we used to talk about when I learned programming very early on in my career. We did modular programming, and so each module was self-contained. We could test that little module, and if we needed to make changes, it was a minimal amount of coding and changing and testing, but it all had to then feed into the bigger software product, as it were at the time, so you could build it and you had all these building blocks. And, you know, the idea of reusable libraries. And we see that today, even with some of the LLMs and things that folks are doing at Snowflake, like with Snowpark containerization, all those concepts that you can use to try to build that. I guess the challenge, like you said, is the shift in thinking. We're not thinking about building a massive enterprise data warehouse. We're now thinking about how we build a data product incrementally and how we break that down into its component parts. I think supply chain is probably a good place to look. They do supply chain bills of materials and things like that: all these different parts that are valuable in and of themselves, but also can then be used to build things that have even more value in aggregation. [00:16:02] Speaker B: Yep, absolutely.
And then a big part of best practices in organizations is ensuring that you're collaborating across the organization, that you're, like I said, coordinating requirements and things, but also ensuring that the data products that you're creating can be discovered by others. You don't want them recreating a data product or going back to the source team or a certain team and requesting another data product. So we're excited to see how people are using tools within Snowflake for registering these data products, or even model registries. You hear that a lot around AI these days: how do you have a registry of the different models? And essentially a model is a data product. But how do you make sure that what you've already created is discoverable within your organization or outside your organization? You might want to be monetizing it or sharing it with partners, sharing it across a broader ecosystem. So how do you make sure that that's discoverable so that you are enabling that kind of reuse? And any reuse is just increasing the ROI on that particular data product, or on the investment that you've made in the data set, et cetera. So I see data products, and all that we're seeing in terms of changing the practices in an organization towards reuse, as really valuable practices within organizations. I mean, that's how we're going to be able to do all this at scale. [00:17:38] Speaker A: Yes. Yeah. And I think, again, the challenge there is the cultural challenge of people being retrained to think, did somebody already build this? Once they figure out what they need, rather than thinking, oh, we need to build this, here's how we can build it, the first question they should be asking is, did somebody already build this?
Because you could have the greatest marketplace in the world with all kinds of metadata and examples and all sorts of things, but if nobody looks at it, then they're not going to know that it's there and that they could use it, and they can waste a lot of time, energy and money building things that somebody else already built, even within your own organization. [00:18:22] Speaker B: You mentioned earlier that I have a focus on, or I'm very interested in, communication and building a culture and evangelism, and that's a big part of it. How do you communicate all this across your organization? And some of the CDOs that I've been talking to lately have really focused on this: I spend a lot of time doing show and tells within my organization, talking to people, showing people what we have, introducing our data catalog or our model registry, just so that people know that it's there. I mean, as organizations get bigger and bigger and more and more people are building these kinds of products, if you're not coordinated about it, you need to have a centralized repository, or at least, it doesn't have to be centralized, it can be distributed, but you need to have a place where you can go find out where these things exist within your organization. And that's something that we're seeing quite a few people focus on. [00:19:19] Speaker A: So where do you see DataOps fitting into this landscape with all this stuff going on? [00:19:27] Speaker B: So, like I said, ultimately, companies are moving towards having as much automated or based on AI as possible. And like I said, the only way that we can really scale this effectively, efficiently, is to have processes in place. DataOps, MLOps, operational processes that can scale. Having those types of things in place is the foundation. It is the landscape on which everything else has to be built.
[00:20:02] Speaker A: Yeah, because, to use your analogy of auto parts and components in a car, without the assembly line, you wouldn't be able to efficiently build those cars. You could have warehouses full of tires and rims and engine parts and carburetors and electronic ignitions, but if they were having to assemble that all by hand every time somebody ordered a car, pretty much most of us would still not be driving cars, because we wouldn't be able to get them. [00:20:39] Speaker B: Absolutely. Absolutely. Yeah. [00:20:42] Speaker A: Cool. So in one of your recent posts, you talked about responsible AI and the need for data diversity. And you mentioned the old phrase trust but verify in relation to all this. So do you have any recommendations for organizations on how they go about getting there? [00:21:03] Speaker B: So data diversity is a topic that's kind of near and dear to my heart, because it touches on so many different things. We all hear about chatbots that have hallucinated, that are inaccurate in their answers, et cetera, or that just don't have an answer. You ask a very specific question about something that they have not been trained on, and they don't have an answer. So in the enterprise world, I really promote this notion of data diversity, and the first piece of it is having access to your own data, all your data. So breaking down those silos, making sure that you have those data catalogs so that people know where to find the right kind of data, and increasingly tapping into your unstructured data. And that's one of the things that we're seeing at Snowflake more and more. IDC used to estimate that 80% of enterprise data is unstructured.
And IDC just published something that said it's now 90% that's unstructured data. And one of the things that we're seeing on our platform is an uptick in access to, and the structuring of, unstructured data so that people can do more queries against it. So breaking down those internal silos, accessing unstructured data, collaborating with partners, sourcing third-party data. So, like, through the Snowflake Marketplace, finding data sources that will complement what you have internally. As one CDO once told me, with our own data, we can only look internally, which makes sense, but we need to see what other waves we could ride in on. We need industry benchmarks, regional trends, those kinds of things. So how do you access that kind of data? Maybe even looking at how to create synthetic data to ensure that you've got the right distributions and populations within the data that you're running your model on. So those are the five steps that I've outlined in terms of ensuring that you've got diversity in your data. And it's all about having that diversity that is going to mitigate the risks of hallucination or bias within AI. But the trust but verify piece: we can't just accept an answer from something. We don't necessarily always just trust, well, we shouldn't, but people do, you know, trust a newspaper article or trust a news source. You should be asking where the underlying source is. You should be trying to understand what the logic behind that conclusion is. And so that's really what trust but verify was about in this context: holding our models to some of the standards that we've established in our lives. What data was it trained on? We need to be able to ask.
And now, with new things like retrieval-augmented generation, you can point a model to some specific sources, which makes it easier to know what sources those answers come from. But it is a practice that we need to get into: not just accepting an answer as the truth, but going back, thinking critically about it, and seeing if we can source and verify at least where it came from. [00:24:36] Speaker A: Yeah, because even back in the early days of data warehousing, that was a question that always came up. I know I built many systems where, with the first round of reports, somebody would question the results and say, I don't think that answer is right. I think your data warehouse is wrong. And we had to be able to, I learned early on about data lineage, right, we had to be able to trace and say, okay, well, you said this was the source of the data. These were the business rules. Here's what we applied, and that's how we got the answer. No, that can't be right. And then we go back and look at the source system. It's like, okay, here's your data in the source system. They're like, oh, that data is wrong. Yeah, and there's nothing that I can do about that in building my data warehouse. If the data you're giving me is wrong, then yes, the answer is not what you were expecting, because it's not right. It's just not right. You've got to go back and correct the source. And so now we're seeing that. I'll say it's probably an order of magnitude more important now if we're going to throw things like GenAI at it, to really be able to understand that.
[00:25:44] Speaker B: And I think, as we're expecting more and more decision makers to make decisions, or use native apps, or, as I like to call them, decision support tools, incorporating the insights from these models and data into their decision making process, we need to teach them how to ask those questions so that they can be comfortable, so that they can have the confidence to say, okay, I'm going to take that insight that tells me that I need to raise my price in this particular market, and I'm going to do that, because I know that it's based on this data and I know the underlying logic in this model. So, yes, I know we can't necessarily know exactly what happened under the covers in certain AI models, obviously with some of the deep neural networks it's much harder, but in a lot of what we're using today, we can teach somebody what the underlying business logic was, and we certainly can point them to what that source data was. So I think that, in order to really put these insights into practice and have an impact on organizations, we have to be able to do that kind of trust but verify and build the confidence. Because ultimately, we can talk about data all we want, but if nobody's taking action from these insights that we're delivering, there's no impact on the business and there's no value that's added. So it's really hard to justify the ROI in the data, in the AI, et cetera, if nobody's using it to make decisions. So we need to be thinking about how we invest in those decision makers and get them confident. Of course, a decision maker could be a machine in some cases, but, yeah, it's still as important to verify. [00:27:37] Speaker A: As you know, when we talk about #TrueDataOps and the philosophy of #TrueDataOps, we had our seven pillars.
I wanted to get some quick feedback from you: of those seven pillars of #TrueDataOps, what do you think is really important in this area? [00:27:54] Speaker B: So I knew that question was coming, Kent, and I hesitated between two. The two were governance and collaboration, because those are two things that are very near and dear to my heart. But I'm going to go with governance, because really, ultimately, there's no collaboration without governance. And the way we think about governance at Snowflake is that there are three pillars. It's about knowing your data, what's in it, tagging it, classifying it, being able to know where you need to apply some of the security measures, the protections, and ultimately doing it. So it's know your data, secure your data, and, the third, ultimately be able to unlock the value of your data. And the unlocking of the value is the collaboration piece. So putting those first two in place allows you to unlock. And we've just recently gone through an exercise where we've looked at a year's worth of usage on the Snowflake platform, and we've looked at the uptake in certain features, things like object tagging and applying masking and row access policies. And we've seen massive upticks in the use of that functionality within the platform. But interestingly, what's increased most is the use of these privacy-protected data sets. And so what that says is, when people can truly know their data and apply the right policies to their data, then they can be comfortable using it in those appropriate ways; they can be comfortable allowing the right user to use it for the right purposes. And that reinforces this notion that if you know your data and you can secure it, then you can unlock the value and you can collaborate with it. And I just think that's really exciting to see.
[00:30:04] Speaker A: Yeah. And so part of, I'll say, the process of building a data product that's going to be used is having the governance in place and the tagging and the appropriate masking, so you're not inadvertently exposing the wrong data to the wrong people in your data product. It's not just about throwing data out there and saying, okay, everybody go for it. Here's a little data product with some information in it, knock yourselves out. [00:30:35] Speaker B: Exactly. Part of the role of a data product manager, and we are seeing more and more of those these days, is to identify what the requirements are for that data and who can use it. What are the governance policies that need to be applied for me to make this a product that I can put out in the wild? It's like putting the label on a product: here's the warning label, not to be taken by children under two years old, or only to be consumed in this quantity. We do that with all products. We put on warning labels, and we include manuals for how to use them. And that's what we're doing with data and data products: we're putting that governance in place so that we can be confident that people aren't going to get in trouble using it. [00:31:24] Speaker A: You know, that's good. That's encouraging that the governance has ticked up, because that's one of those, I'll say, standard data management practices that we've been talking about for a couple of decades that many organizations just didn't feel the need to invest in. But now that we're getting into GenAI and we're going to do all this stuff with data, and then you throw in data products, it appears that people are waking up and going, yeah, we probably shouldn't just put that out there. We need to have some controls on it.
And that's the data governance piece. [00:32:02] Speaker B: You know, before I started at Snowflake, when I was back at Forrester, I wrote a report on data governance for data commercialization. But it could be commercialization, monetization, collaboration, whatever you want to call it: this notion of allowing others to use your data. And I wrote about bidirectional governance and bidirectional lineage. So you know where your data came from, you know the provenance of your data. But in this world where we're collaborating with people and others are using our data, we need to know where it's going as well. It needs to be bidirectional: both where the data comes from and how it's being used. And that's another element that we are seeing within the platform: monitoring data usage and understanding how it's been used and in what kinds of use cases. So you can not only control up front, but you can know what's happening downstream and track that usage when you're using and sharing data on the Snowflake platform. So that's another element that I think is really exciting. [00:33:15] Speaker A: Well, unfortunately, we're already out of time. I knew our conversation was going to easily fill up our time slot today, and we've got so many other things that we could talk about. So what's next for Jennifer? Where are you going to be speaking in the next couple of months? [00:33:32] Speaker B: So as we mentioned earlier, next week I'm going to India, which I'm excited about. Lots of excitement around AI and GenAI in India. But in the next few months I'm going to be at the Data Innovation Summit in Stockholm. We'll be speaking there about what we've talked about today: does your data strategy match your AI ambitions? That kind of thing.
And then I'm going to be at Snowflake Summit in June, and I'm very excited to connect back with my colleagues and all of our customers and just feel the excitement that we're seeing around the Snowflake platform and a lot of the announcements that we've made around AI as well. [00:34:12] Speaker A: And Snowflake Summit this year is going to actually be back in San Francisco, right? [00:34:18] Speaker B: Yes, we're no longer in Vegas. It will be June 3 to the 6th in San Francisco. Yeah. [00:34:25] Speaker A: Very good. And so what's the best way for folks to connect with you after the show here? [00:34:30] Speaker B: The best way is to either reach out via my Snowflake email or through LinkedIn. And I think you've got the, there you go, the way to connect with me on LinkedIn. [00:34:41] Speaker A: Awesome. Well, it's been great having you on the show today, Jennifer. Appreciate your time and you fitting us into your, I know, incredibly busy schedule there with all that you do at Snowflake. So thanks for being with us today. And thanks to all the viewers for joining. Be sure to join me again in two weeks. My guest will be Dr. Santona Tuli, who is a true data scientist and engineer with a PhD in physics and nuclear science. Currently, Santona is the director of data at Upsolver. I met her in person at Data Day Texas back in January, and I think we're going to have a really good conversation with her about what's happening in her world with data and data science. And so, as always, be sure to like the replays for today's show, and tell your friends about the #TrueDataOps Podcast. Don't forget to go to truedataops.org and subscribe to the podcast so you don't miss any of our upcoming episodes. So until next time, this is Kent Graziano, the Data Warrior, signing off for now. [00:35:52] Speaker B: Thanks, Kent. It's been great to be here.
