Episode 45

January 31, 2025

00:34:28

Paul Rankin - #TrueDataOps Podcast Ep. 45

Hosted by

Kent Graziano

Show Notes

Paul is a Senior Data Architect, BI and Analytics Solutions Specialist with a focus on cloud architecture and the data management ecosystem. He has over fifteen years of experience designing and delivering data integration and BI solutions for a large number of national and international organizations. Paul currently lives and works in Switzerland, where he recently wrapped up his role as Head of Data Management Platforms with Roche Diagnostics and now consults with other enterprise data teams.

Episode Transcript

[00:00:00] Speaker A: All right, welcome everyone to the True DataOps Podcast. This is our episode 45 and I'm Kent Graziano, the Data Warrior. Each episode we try to bring you a discussion around the world of DataOps with the people that are making DataOps happen today. So be sure to look up and subscribe to the DataOps Live YouTube channel, because that's where you're going to find all the recordings from our past episodes. So if you've missed any of the prior episodes, you know, let's get the new year off to a good start and go back and take a look at some of those and see what you might learn and see who you might hear from. Better yet, you can go to truedataops.org and subscribe to the podcast so you don't miss any of the future episodes. Now, my guest today is my good friend, a very experienced solution architect and now independent consultant, Paul Rankin. Welcome back to the show, Paul. I think this must be. Is this the second or third time you've been on? [00:00:57] Speaker B: Yeah, thanks, Kent. Great to be back. I think it's the second one with you, you know, and I did one with Frank. Yeah. So, yeah, great. Love data. Yeah. [00:01:14] Speaker A: So for the folks who don't know you very well, will you give us a little bit of your background in data management and a bit about your career and, you know, where you've been and where you're going? [00:01:27] Speaker B: Yeah, I mean, like yourself, Kent, you know, I started way back when, you know, really the only data-specific solutions around were pretty much mainframe-based solutions, talking SAS and, you know, Base SAS and DB2 on the mainframe. Then moving into that realm of BI, if you like, with Cognos and Business Objects appearing, you know, on the scene. And I've always really been into data and analytics. And then that moved through to.
I was a consultant at IBM and that moved more into TM1 and planning, where I did a bit of consultancy, and then I came across to Switzerland. The cloud was taking off at that time. Then we had Azure and AWS, and I moved to Accenture, where of course they were at the forefront of that evolution in the data space. Everyone was migrating their data to data lakes and data warehouses on the cloud. And then I started working with, as you know, our good friend Omar at Roche, who wanted to do something a little bit different. It was about five years ago now, with Zhamak's data mesh paradigm. And that's when we started working directly with ThoughtWorks to try and, I suppose, prove data mesh at scale, you know, at Roche. And, you know, that was kind of the journey there through data mesh. Now I'm an independent consultant, and I'm kind of helping, you know, with my experience. You know, most companies, they want two things, right? They want speed. Yeah. But they also want good data management practices and governance and quality and the things that come with it. Right. And what I've found, you know, in the last year of going independent is that usually companies, you know, take one over the other. Usually, you know, they want to go fast, but they forget about the data management practices and the governance, you know, they kind of leave these behind. And now, you know, there is no choice. Yeah, we need both. Absolutely. Together. And this is what I'm doing now, helping, you know, some organizations trying to, you know, get to more of a modern data strategy. But at the same time, you know, it's not one-size-fits-all, as you know. You know, it's company specific.
You need to understand the maturity of the company, how advanced they are in their data journeys, how, you know, I suppose distributed their whole ecosystem is. Because, you know, a lot of people make the mistake of saying, okay, distributed architecture, distributed ownership, data mesh is what I want to do. But they look at their organization, or they forget to look at their organization, and actually their organization is not that at all. It's very centralized. They have a very centralized process of building, managing and maintaining data pipelines, and, you know, to change a company like this is actually, you know, really difficult, as opposed to a company that is by nature a lot more distributed. So anyway, I digress, but this is where I am today, trying to help these guys move in the right direction, I would say. [00:04:43] Speaker A: Yeah. And you're also a Data Vault expert too, if I remember correctly. I think the first time we ever talked, you wanted to ask me about doing Data Vault on Snowflake, back when I was working at Snowflake. Yeah, you were at Accenture, working at Roche, I think. [00:05:00] Speaker B: Exactly. I am no Data Vault expert, right, let me tell you. I've worked with a lot of really amazing data engineers and Data Vault modelers who are much more proficient than me. But what I can do in one of the organizations I'm working with just now is help them. They are all in on Data Vault. They have a very big team that is providing a lot of data models, but they also, you know, the company is also very distributed and they want distributed ownership. Not full data mesh, but distributed ownership. So I'm helping them, you know, try to fit the Data Vault methodology, you know, side by side with this distributed ownership. Because, you know, clearly, and we talk about this all the time, there are, you know, some things that don't fit, let's be honest about it. Yeah.
And when you really break it down and deconstruct it. Yes, absolutely. Both can work together, but you do need to set some guardrails, some boundaries, and some new ways of working as well. [00:06:15] Speaker A: Yeah. So now you're back to good data management practices. [00:06:19] Speaker B: Of course. Exactly. It all boils down to good data management practices. [00:06:25] Speaker A: Yeah, well, yeah, I think this season we were trying to take a little bit of a step back on the show here and look at how the world of True DataOps has really evolved and what we have learned over the last couple of years. And you've certainly been right in the middle of all of this with your work with Roche and other companies. So you've been helping customers like Roche try to take advantage of things like data mesh and Data Vault, and with tools like DataOps Live and Snowflake. Right. So give us a little flavor of what you've seen and how this space has evolved in the last couple of years and, you know, kind of what you're seeing today versus, you know, maybe when we first started and we first met. [00:07:17] Speaker B: Yeah, I have thought about this. Somebody else asked me this question a little while ago. What is the big difference you've seen? You know, let's just go back, you know, four or five years. [00:07:28] Speaker A: Yeah, it's really not that far back. [00:07:29] Speaker B: Yeah, no, it's not. You know, and everybody was starting to really talk about, you know, speed. Right. I mean, the data is coming so fast, there's so much of it, and the business, you know, needs to use it to their advantage. Absolutely. That's what it all boils down to: competitive advantage for a lot of these organizations. And they need speed for this. Yeah. Not just speed of pipelines or development, but they need, clearly, you know, speed to insights, speed to market. And what changed, right, from five years ago was the developer, the standard, let's call it BI developer. Right.
The database developer was absolutely not skilled and equipped for this change. Right. Yes. There were two sides of the fence. Right. What we were trying to do is take traditional BI developers, you know, visualization developers, pipeline developers, whatever you want to call them, five years ago, and we were trying to make them a little bit more. Well, as we know, today they're DataOps developers, right? But back five years ago, it was kind of DevOps developers. We were trying to shoehorn these guys that had the business knowledge, a little bit of knowledge about how to model data, how to build pipelines, we were trying to shoehorn them into that. Now, actually, to go on a distributed ownership journey with data mesh and everything else, you need to be a bit more than that. You need to understand CI/CD, you need to understand change data capture, you need to understand, you know, good branching and merge techniques, publishing, you know, deployment, all. [00:09:22] Speaker A: That, well, and product management. Because now we're talking about data products even, right? [00:09:28] Speaker B: Exactly, exactly. That was also, you know, the thing in the world, or the industry, right, that was providing these huge, you know, copious amounts of developers to every company. We were actually not ready for this. And it's been a huge learning process, you know, not just for the companies who could not get the right developers or data engineers, whatever you want to call them, but also for the vendors that were providing them, that were supplying them. You know, because, I mean, as you know, we worked with big guys like Accenture, TCS, you know, these guys. And, you know, really we would be asking, okay, we need a data engineer to work in our cross-functional data product team, our vertical team. And immediately, you know, five years ago, they would be asking, okay, what technologies, right, do you use? Right?
And it doesn't work that way, you know. I mean, we don't care if we're using Snowflake or AWS or Databricks or whatever. We need data engineers, you know, fit for the modern data platform. And that's the journey where I see the most change, you know, in this whole thing. A four-year journey, a five-year journey. But thankfully now it's getting a lot better. Vendors have understood, you know, the skills that modern data engineers need. Because it's funny, we learned our lesson pretty quickly in the first six months at Roche and we started asking vendors for DevOps engineers, right? Because, yes, they might not know anything about data, but absolutely, they were much better at doing that whole good data management practices thing, as you rightly said. So I think it's a bit of a mix. If you have a DevOps engineer that knows about data, this is what you need. But yeah, that's the biggest difference. [00:11:25] Speaker A: So from your perspective, what is DataOps really, and how important is it today for companies to be adopting some sort of DataOps practices to manage these massive landscapes that we now have? [00:11:42] Speaker B: I mean, absolutely, it's important. For me, DataOps, you know, is not just about a code repository, it's not just about CI/CD pipelines. To me, it's governance, you know, it's end-to-end governance of the build and the management and the maintenance of data products. We call them data products today, we might call them something else tomorrow. But essentially, and it's right, I mean, some companies do not, you know, want the distributed ownership, they're very monolithic still, they're very central. Basically, a lot of companies have opted for IT organizations, the traditional data warehouse team or integration team, building source-aligned data products. Right. Some companies have opted for that and publishing them as data products, you know, and that works. Okay.
And actually what you end up with is quite a highly skilled team of data engineers, you know, building, managing, maintaining source-aligned data products, you know, using tools like DataOps Live or any other tool. And this works. Yes, you've still got the monolith effect, you've still got a huge backlog, right? But actually, you know, your team stays pretty central, you have the skills all in that team, and it actually works. Yeah, but when you suddenly move, right, to this distributed ownership, where, I mean, at Roche we had like a thousand developers, over a thousand developers, right, all developing in distributed teams. And, you know, how do you standardize that? How do you standardize the build of data products? Right? What you need, clearly, is some kind of tool, you know, like DataOps Live, or, you know, as you know, some people go for the build option, the home-build option with, you know, GitLab, Automate, Liquibase, all together, you know, packaged up as a solution, right? Whatever you choose, you know, you need to understand that you cannot just give it to them and say, all right, get on with it. To me, DataOps is a process of a guided developer experience. That is the key. It's a bit like showing your developers, here you go, here's your AWS account, go and build me a database or a data product or whatever. Knock yourself out. Choose whichever of the 360 services you want and give it to me next week. You know, I mean, honestly. And this is a bit like DataOps. I mean, essentially, you know, if you give them the repository and tell them to build whatever they want, using whatever orchestrator, using whatever language they want, whatever way they want, you'll get a complete mess.
Yeah, and you know, this is where the governance comes in, and I work with a lot of customers to say you need a reference project, a central reference project that child projects can pull from, using SDKs, reusable components that are managed and maintained by the data platform team, so that you're governing the way that all these developers are building. Because they're coming so fast that you're never, ever going to be able to police and monitor and support all developers. But if you are publishing reusable components for them to pull in and configure to build their pipelines, that gives them speed and it also gives you scale, at the end of the day. And this is what it really is to me. It's the developer experience, but the governed developer experience. Yeah. [00:15:31] Speaker A: Okay. It's been, like, it's over four years now since we, excuse me, first published the truedataops.org site and, you know, the philosophy of True DataOps, the Dummy's Guide to DataOps. You know, Justin and Guy and I all worked on that, and then we had the seven pillars of True DataOps. You know, for listeners, if you haven't looked at them, you can go to truedataops.org/7-pillars, that's the number 7, to look those up. But do you think those seven pillars still resonate today? [00:16:09] Speaker B: I mean, of course they do, in some way or another. You know, I mean, I don't think anybody is really rolling them off their tongue every time they're building a data pipeline, if you ask me. But I still think that they will evolve as well. Because let's face it, DataOps as a paradigm, as a principle, a guiding principle, is still fairly new. You know, I mean, we're only talking, you. [00:16:45] Speaker A: Know, five, four years since we published that. [00:16:47] Speaker B: Yeah, four years since you published it. Five or six years since someone first heard the term, probably.
So, you know, we had guys like Justin and Guy and yourself really at the forefront of, you know, trying to define, you know, what DataOps really means. And I think that definition will always evolve, and I think the seven pillars, you know, will have to be continually evolving as well. It's a bit like, you know, SAFe and these things. And, you know, what is it? You know, I forget: the findable, addressable characteristics, you know, interoperable. Yeah, but that evolved from FAIR. Yep. So, you know, FAIR, again, about maybe eight years ago we heard FAIR for the first time, and that lasted for a while. And then as soon as data mesh came along, actually FAIR was not good enough, or it was good enough, but it didn't service everything. So somehow the FAIR principles evolved into, I think it's the seven data product characteristics, and that will evolve even further as you go. And I think that even Guy decided that the seven characteristics for data products that Zhamak published were not enough at some point. So he, I think he did publish. [00:18:18] Speaker A: Some things about what really defines a product. [00:18:21] Speaker B: Yeah, yeah. So, you know, I think, you know, by nature these guiding principles will always evolve, they will always change. But I think, at the end of the day, like we keep saying, it boils down to the practice of good data management. It's not just, you know, CI/CD, it's not just DevOps, you know, it is good data management practices. And only now, I would say, are we really understanding what those good data management practices are in a modern data world, right, where the business needs speed, right, but as a whole organization we need security, governance, you know, all these good data management practices. But it's, you know, it's not easy.
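[Editor's note: the "reference project" pattern Paul describes, where a platform team publishes governed, reusable components and child data product projects only pull them in and configure them, can be sketched in miniature. This is a hypothetical illustration in plain Python; all class, function, and step names are invented for this example and this is not the DataOps Live API.]

```python
# Hypothetical sketch of a governed "reference project": the platform team
# registers approved, reusable pipeline components; child projects can only
# assemble pipelines from that catalogue. Names are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class PipelineStep:
    name: str
    run: Callable[[dict], dict]


@dataclass
class ReferenceProject:
    """Central catalogue of approved steps, owned by the data platform team."""
    steps: Dict[str, PipelineStep] = field(default_factory=dict)

    def register(self, step: PipelineStep) -> None:
        self.steps[step.name] = step

    def component(self, name: str) -> PipelineStep:
        # Child projects can only pull steps the platform team has approved.
        if name not in self.steps:
            raise KeyError(f"'{name}' is not an approved reusable component")
        return self.steps[name]


# Platform team side: publish the governed, reusable components.
platform = ReferenceProject()
platform.register(PipelineStep("ingest", lambda ctx: {**ctx, "rows": 100}))
platform.register(PipelineStep("validate", lambda ctx: {**ctx, "valid": ctx["rows"] > 0}))


# Child (data product) project side: configure a pipeline from approved parts only.
def run_pipeline(component_names: List[str]) -> dict:
    ctx: dict = {}
    for name in component_names:
        ctx = platform.component(name).run(ctx)
    return ctx


result = run_pipeline(["ingest", "validate"])
print(result)  # {'rows': 100, 'valid': True}
```

Anything outside the approved catalogue fails fast, which is the guardrail Paul is pointing at: developers get speed from prebuilt parts, while the platform team keeps control of how pipelines are built.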
[00:19:09] Speaker A: Yeah, well, and now we've got, you know, the explosion of, at least, interest in and attempted usage of AI and GenAI to even build data products. [00:19:25] Speaker B: Right. [00:19:27] Speaker A: So how important do you think all of this is, you know, the DataOps principles and things like that, with that now being inserted into an already complicated data management space? [00:19:40] Speaker B: Yeah, yeah. I mean, I just don't think we really understand what effect that will have fully yet. You know, it's not going away, clearly, that's for sure. It's here to stay, and companies, you know, have to really get their act together. And by the way, I'm no expert on it, let me just put that out there. But companies are really scrambling around trying to find, you know, the best way to deal with it, because you've got CIOs, CEOs that are going, yeah, we should be using more AI. But then, you know, underneath, in reality, you know, you've got real multimillion-pound decisions being made on data, right, that has been generated, you know, by who? [00:20:27] Speaker A: You know, someone based on what, some. [00:20:30] Speaker B: Machine, and based on what? Yeah, and this is the bridge that we need to cross: where do we use AI, and how does that fit into the DataOps principles, and where don't we? Because AI can be used for anything, as you know. It can be used for, clearly, refactoring your code, even for sense-checking your code. It can be used for suggesting different ways of doing things, detecting anomalies, everything. It's finding the sensible places where we can use it to accelerate our journey, especially in the DataOps world. Now, you know, go back to the speed thing. Yeah. Currently DataOps has clearly mastered the world of micro-batch and batch type modalities when it comes to data products.
All right, now when everything's moving to real-time, event-based triggering, then, you know, a lot of the capabilities that a tool like DataOps Live is offering are maybe diluted a little bit when the event is getting triggered from a machine or from a battery or from a hydroelectric power plant or wherever. And that is where, for me, working with a lot of companies who are now moving towards these streaming modalities, what I'm currently doing is trying to work out how the DataOps capabilities fit into these types of modalities. [00:22:07] Speaker A: Well, you've still got to, I guess, you know, still got to have that governance and management of the pipeline. It's one thing to be streaming the data into a platform. [00:22:15] Speaker B: Yeah. [00:22:16] Speaker A: But then what do you do once it gets there? Right. Are you feeding just the raw data straight into some sort of AI? Is it going into a dashboard? Or is it still going through what you and I would probably consider a traditional data warehouse lifecycle of some sort, you know, some sort of transformations? And all of that code still has to be managed. Yeah, but even faster. Right, right, of course. Yeah, maybe probably more governed, controlled. But how do you accelerate all of that without some form of automation? Right. [00:22:51] Speaker B: Yeah, no, no, 100%. And this is where, you know, I think the key really is, especially when you're talking about DataOps, where does it really lie now? Because I think it's different where we are now versus five, six years ago. Yeah. And they had a clear vision then. I mean, 90%, five years ago, people were doing micro-batch at best. Right, right. [00:23:19] Speaker A: They were trying to do real time and maybe they were hitting near real time. [00:23:24] Speaker B: Yeah, yeah, exactly. And in most, still most cases, probably micro-batch is fine. Right. For a lot of organizations.
But more and more, especially in industries that rely on real-time decision making, clearly they're getting a lot more data to deal with, and we need to, you know, understand what I call the modalities. It's a word we used at Roche, actually. It's the makeup of the patterns: the frequency of the data, the volume of the data, you know, the machine type it's coming from, the format it's coming from, and what you need to do with it. Yeah. And all of that kind of makes up a modality. And it's, you know, I mean, it's good. Now we're working with an energy company who have battery information coming in. You know, so in one case you've got the battery feeding in what it should be outputting, but you've got the real output, and then you've got the frequency of temperatures and weather conditions, where the battery will put out more or less, you know, power. And it's really interesting. This is quite new to me, and I now work more in the energy trading sector, where we're exposed to a lot more real-time decision making, you know. And of course, as you know, this makes the data really quite a bit more complex. Yeah. And you do need, your people are relying on your data pipelines to be absolutely nailed, spot on, you know, giving them the right results. Because, you know, one single mistake and these traders are making, you know, horrendous decisions. [00:25:08] Speaker A: So yeah, that would be good. [00:25:11] Speaker B: Of course, yeah. So it'll be good for someone. [00:25:16] Speaker A: What advice do you give your clients on sort of the buy versus build decision and, you know, how to get started on having a well-governed, kind of end-to-end DataOps approach? [00:25:27] Speaker B: Yeah. You know, truly, again, it depends on the company and the ethos of the company. Are they an IT organization by nature? You know, are they engineers? Traditionally, companies that are engineering IT companies love to build things. Yeah.
And if you don't give those people the opportunity to build, you know, they'll go somewhere else, to another company, and they'll build it there. But, you know, in general, clearly you're going to accelerate your journey with a buy capability. I mean, there is no way that I've seen any company build a DataOps solution from scratch that can get up and running in production, you know, with a highly scalable, highly trusted service, quicker than, you know, a tool like DataOps Live. I mean, that's for sure. And even if they think they can do it cheaper at the start. Yeah. The total cost of ownership is always either the same or more expensive in the long run. I saw a company once who projected the costs of a built DataOps solution, and actually the cost went up for two years and then started going down after two years. And I was saying, this is crazy. I have never seen the cost of a product that's been built go down. I mean, are you going to close it down after two years or what? [00:27:05] Speaker A: Yeah. [00:27:05] Speaker B: What. [00:27:05] Speaker A: What were they thinking there? Yeah, yeah. I mean, it's like, yes, you're not going to do any more enhancements, the underlying systems aren't going to change. You know, there's not going to be a Snowflake upgrade that changes something, there's not going to be, you know, a data pipeline change. It's like, yeah, how could you think it was going to go down? [00:27:28] Speaker B: Absolutely, absolutely. And I've seen a lot of companies make that mistake of, oh, we could do it ourselves, it's cheaper. [00:27:34] Speaker A: Yeah, that's the total cost of ownership conversation. Yeah, that's the one I always, you know, point people towards. It's like, yeah, you've got a great set of engineers. I had somebody, you know, you'll find this probably hysterical.
One of the very first Snowflake events I did, when we started doing little tours, I had a guy in the audience who was senior even to us, been in the space for literally decades. When I'm presenting what Snowflake does and the new features, the scalable storage, the scalable compute, auto-suspend, all that stuff, he's like, I can build that myself. [00:28:19] Speaker B: Yeah, great. [00:28:21] Speaker A: It's like, okay, great. And then I sat and talked with him afterwards and I said, okay, so, you know, maybe you have the experience to build all of this, but what happens to your client when you retire? [00:28:33] Speaker B: Yeah. [00:28:33] Speaker A: And you leave. Are there people there that you're working with that you think can handle and manage what you built? And he got thinking about it and goes, yeah, that's probably a good point. It's like, yeah, I can make a bunch of money building this for them, but after I'm gone, yeah, they don't have anybody that could really keep it running. It's like, well, then, you know, do your client a favor and have that conversation with them. I think, you know, they think short term. I worked one place once where I was doing a total cost of ownership for our data warehouse, and they said I could not count the salary and overhead cost of the current employees that were on my team. It's like, why not? I mean, this is what it costs us to do all of this. They said, yeah, well, we're paying them, they're employees, and if they weren't doing this, they'd be doing something else. [00:29:28] Speaker B: Something that's more valuable. [00:29:29] Speaker A: Okay, so, you know, I have like a team of five people building this massive enterprise data warehouse system that has to be maintained. They're like, no, you can't count any of the staff costs in your total cost of ownership. All we want to know about is software, how much the software costs, and maybe some.
If you had to bring in consultants, but not the DBAs that are going to be maintaining it. It's like, but this is what it costs to run it. No, no, no. Yeah, they're basically free. [00:30:00] Speaker B: I mean, I'd like to see their balance sheet if they think they're free. [00:30:03] Speaker A: Yeah, well, like I said, that was very interesting. Yeah, I think people definitely have a problem with that total cost of ownership conversation. [00:30:12] Speaker B: I think that's why it's so hard for Snowflake to, you know, to do that, because it's hard to prove it at the end of the day. [00:30:21] Speaker A: Right, yes. Yeah. Because nobody wants to admit what they're currently doing, or currently not doing, with their time in order to say, yeah, where are the savings here? Right. And how can we redeploy our personnel? Because, you know, whether it's DataOps Live or Snowflake, there are things that our people had to do in the past with the older technologies that they now don't have to do, and figuring out where's that business value, what can they be doing with their time instead? And at the same time, are we improving the quality, which I think, you know, the governance and quality of what we're doing, especially if we're feeding it into AI, you know, can we bump that up? And how much is that worth in comparison to what we were doing? [00:31:11] Speaker B: That's really a good point. That kind of leads me into. I'm working with an organization just now who are using DataOps Live, by the way, and they had an old legacy team, more like a data warehouse team, nine, ten people in their team, building legacy pipelines, SQL Server or Oracle, I forget what they were using. But essentially, you know, for any new feature, any new loader, any new ingestion, it took on average three months. Right.
To do with one FTE. And actually, you know, moving to the new data platform, with DataOps Live, with AWS and Snowflake, they reduced that to five days to build a data product. Three months to five days, yeah. So absolutely, it can work. A modern data platform can accelerate your journey 100%. [00:32:10] Speaker A: Yeah. All right, well, unfortunately, we're out of time, Paul. It's like you and I could go on and on forever. [00:32:17] Speaker B: Yeah. [00:32:17] Speaker A: Yeah. [00:32:18] Speaker B: We need to extend it next time. [00:32:19] Speaker A: Yeah, yeah. So what's next for you? Any events or meetups or anything that you're going to? [00:32:28] Speaker B: I'll be off to the Winter Data Conference in March. It's the light data guys, with Chris and the team. Oh, yeah, there's a few people who will be there. [00:32:39] Speaker A: The one in Switzerland, right? [00:32:40] Speaker B: Yeah, they're moving it this year to Austria. It was in Switzerland for the last two years, now it's moving to Austria. Austria, yeah. It used to be Skids; now it's changed to the Winter Data Conference. So I'll be there in March. [00:32:54] Speaker A: Okay. [00:32:55] Speaker B: If anybody wants to be there. And, you know, clearly for the rest of the calendar, I need to check with my. Well, I was going to say my PA, but I need to check with my wife, actually. [00:33:09] Speaker A: I'm in that boat now too. Yes. [00:33:11] Speaker B: Yeah. [00:33:11] Speaker A: What's on my calendar? [00:33:14] Speaker B: Awesome. So, yeah, listen, I'm happy if anybody wants to reach out to me directly, just ping me on LinkedIn and we can have a chat. You know, more than happy to do that. Right. [00:33:25] Speaker A: And there's a QR code there on screen for folks if they want to. You can scan that, or you can just search for Paul on LinkedIn. I'm sure you'll find him.
Well, if you're watching this event, there's a link to his profile right there on the event page already. Well, thanks for your insights and for being my guest today, Paul. It's always great to chat with you. As always, thanks to everyone who's online for joining, or those of you who are watching this as a replay. You can join me again in two weeks. My guest is going to be Barzan Mozafari, who is the CEO and co-founder of Keebo AI. So this is going to be a really interesting discussion with him. And as always, be sure to like the replays from today's show and tell all your friends about the #TrueDataOps podcast. Don't forget to go to truedataops.org and subscribe to the podcast so you don't miss any future episodes. So until next time, this is Kent Graziano, the Data Warrior, signing off for now. Ciao. [00:34:24] Speaker B: Thanks, Kent. Bye.
