Episode 55

October 29, 2025

00:29:13

#TrueDataOps Podcast - Nadia Moses - S4.EP2

Hosted by

Keith Belanger


Show Notes

Join Keith Belanger, DataOps.live's very own Field CTO & Snowflake Data Superhero, as he meets and discusses everything data with a range of guests in our second episode of Season 4.

Nadia is Lead Data Engineer in the Self-Service Data Hub at Eutelsat.

She holds an MSc in Data Science and a BSc in Mathematics. Having joined Eutelsat over four years ago, Nadia now plays a key role in mentoring data engineers and shaping the data platform to support global connectivity.

Keith has over 30 years in data, leading multiple Snowflake cloud modernization initiatives at Fortune 100 companies and across diverse industries, specializing in Kimball, Data Vault 2.0, and both centralized and decentralized data strategies.

Sign up for one of our Hands-On Labs today - see www.dataops.live for info.

 


Episode Transcript

[00:00:00] Speaker A: Hey, welcome to the True DataOps podcast. We'll get started here in just a few moments, so we'll let everybody get logged in. Hello everyone, welcome to another episode of the True DataOps podcast. I'm your host, Keith Belanger, Field CTO here at DataOps.live and Snowflake Data Superhero. Each episode we explore how DataOps is transforming the way organizations deliver trusted, governed, and AI-ready data. If you have missed any of our previous episodes, you can always catch us on our DataOps.live YouTube channel, so subscribe and like us and you'll be notified about any upcoming sessions. I'm really excited. My guest today is Nadia Moses, the lead data engineer at Eutelsat, a company that manages an extreme amount of data, which we'll get into. They recently went through a large merger and are currently also a customer of DataOps.live. Welcome to the show, Nadia. Excited to have you.

[00:01:01] Speaker B: Thank you so much for having me. I'm excited to be here.

[00:01:04] Speaker A: Excellent. So for those of you who do not know, can you share a little bit more of your background and your journey at Eutelsat?

[00:01:16] Speaker B: Yeah, I can give a bit of background on Eutelsat as well. We're a satellite telecommunications company. We started off as two separate entities: OneWeb, who do low Earth orbit satellites, which is where I started, and then Eutelsat, who have been around for a while and do geostationary satellites. And in 2023, as you mentioned, we went through a large merger. But about me: I started out like everyone kind of does, not really knowing what direction to go in. But I really enjoyed mathematics, so as I said, I did my undergrad in mathematics, doing very pure mathematics and statistics. And again it was just like, oh, what can I do with that? Obviously the world's my oyster now. But in one of my modules at university I did a module on machine learning, and I was like, oh, that's kind of cool what people are doing with data. That's really cool. And then the summer after I graduated, I actually did a bit of work experience at OneWeb, just seeing how the system works. I did a bit of work experience in the ground network operations, a bit in the satellite operations, and then also in the security department. And just seeing the sheer amounts of data and everything they were doing was great. But then I decided not to go straight into work; I went back and actually did my Master's in data science. And as part of my dissertation, I could do an internship somewhere and base my dissertation on that. And I thought, where could be more perfect than a satellite company? So I rejoined as an intern, finished off my Master's there, and then ended up going into data engineering. At the time, the data team at OneWeb was still starting off very early, so they were still building their data platform. So I came in as sort of a data engineer, data analyst, data scientist, not really sure what direction, but they were doing some great things with data engineering. It was a company called Datalytics, who work very closely with DataOps as well, and they were helping build the platform.
And I think shortly after that, maybe end of 2022, their contract ended, and it was pretty much just myself and the product owner. But it gave me this incredible opportunity to dive straight in and start leading that data engineering team. And I've seen it grow across the years, and I've learned so much, obviously, but working with Snowflake, working with DataOps, I've just really grown into this role.

[00:03:52] Speaker A: That's great. It's always interesting hearing people's history into data. I like that. One thing from our past conversations is the sheer amount of data that Eutelsat processes. Give people a little bit of an understanding of how much data you guys are managing.

[00:04:13] Speaker B: Yeah. So I can speak about the OneWeb side. We started off with a few satellites, and now I think we have something like 600, 650 satellites in orbit. And obviously each one of those satellites generates data, generates telemetry, and I think we probably ingest and transform around a million rows of data per day. And as of the last week or so, definitely since we did our Big Data London talk, we've actually hit a petabyte of storage in Snowflake. I don't know if that's something worth celebrating, but yeah.

[00:04:48] Speaker A: Yeah, that's crazy. Now, you were involved with the data at Eutelsat before you had Snowflake and DataOps.live. Give me a little bit of an understanding of what it was like before, your journey, and what it is now with your Snowflake and DataOps implementation.

[00:05:13] Speaker B: Yeah. So DataOps was actually a big part in developing our data platform. Before, OneWeb didn't really have a data platform. Every individual team was doing their own thing. You had the satellite operations doing their own thing, you had the ground network doing their own thing and storing their own data. And that created these silos where people couldn't really work between each other, and I think it also didn't really help them see the potential of their data. So the idea for our team cropped up around the beginning of 2021, when the data platform started. Teams weren't able to share data, it was inefficient, and it was causing duplication of data. No one could really see their potential. But we had three options: carry on the way things were working, which obviously wasn't going to be helpful in the slightest; start utilizing AWS more, which, yeah, AWS is great and a lot of people use it today, but it's something you still have to be very technical for, even as an end user; or build a data platform, where Snowflake and DataOps really fit our use case in terms of scalability, because as you mentioned, we have so much data. And the approach we wanted to take was self-serve. So we don't do any of the reporting; we let the teams do that themselves. We just do the process of bringing in data, governing it, doing all the business validation, and then giving it to end users: here you go. I think that really helped teams with data exploration, and they're discovering use cases that way themselves.
[00:06:57] Speaker A: Right. Now, I know you guys have gone hand in hand with Snowflake and DataOps from day one. In my very first few Snowflake implementations, I didn't have a DataOps.

[00:07:11] Speaker B: Yeah.

[00:07:11] Speaker A: I'd be curious, from your take: if you were doing Snowflake without DataOps, where would you have gone? What value has DataOps brought to your ability to implement and manage your Snowflake initiatives?

[00:07:26] Speaker B: Yeah, for sure. I can give a quote from someone earlier as well, who said that the data team is actually really valuable in helping validate and test the data for the business. And I think that's something really key DataOps allows us to do: put in these validation tests. And also we're a team that's chopped and changed a lot over the years, as I said. We started off with Datalytics doing our consulting for us, and then they all left, and then we had to bring in new people with all different backgrounds and different levels of experience. But DataOps is a really easy platform to use and to get accustomed to. People have backgrounds in dbt, people have backgrounds in Snowflake, or they don't have any, or they have backgrounds in GitHub. It's just really easy to use, and it enables us to do that continuous integration and continuous delivery really easily. And I think as well, we have an agile way of working, so we don't follow any set deployment schedules or sprints. We're constantly churning out work, which is really useful for us.

[00:08:33] Speaker A: Right. And you're able to churn out releases on a daily basis if you wanted to, right?

[00:08:40] Speaker B: And we do. Yeah.

[00:08:42] Speaker A: But at the same time, you're making sure you get your tests and everything already into those packages. So where do you think you would be today if you had just gone and done Snowflake without that DataOps kind of partnership?

[00:09:01] Speaker B: I don't want to use the word mess, because I do have faith in data engineers and my team, but I think it definitely would have been a bit more difficult in terms of collaboration and governance around everything, and permissions. Whereas with DataOps we have that all in one place. We're really easily able to collaborate; we can review each other's work, we can see each other's work easily. You know, we also have amazing support from DataOps.live. You guys are great; we always have support when we need it from you. So yeah, I think we'd be more unstructured. That's a good way to put it.

[00:09:42] Speaker A: Yeah. I'm assuming, like many businesses, your business wants more and more, faster and faster, from you guys every day. How do you feel having DataOps has been able to make you guys go faster, as you said?

[00:10:05] Speaker B: Yeah, I think that's a good point. As you said, exactly, they do want us to go faster and faster. But I guess it's having everything in one place and being able to do the development really easily. And also just comparing between different environments. You can easily send something to someone and be like, hey, can you check this out? Can you help me here?
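[Editor's note: to make the validation-testing and CI/CD discussion above concrete, here is a minimal Python sketch of the kind of data-quality gate a pipeline job could run against Snowflake. It is illustrative only, not Eutelsat's code: the table, column, and environment-variable names are hypothetical, and in DataOps.live such tests would normally be declared in project configuration rather than hand-written scripts.]

import os
import sys

import snowflake.connector  # pip install snowflake-connector-python

CHECKS = [
    # (description, SQL returning a count of offending rows) - hypothetical names
    ("telemetry rows have a satellite id",
     "SELECT COUNT(*) FROM TELEMETRY.RAW_READINGS WHERE SATELLITE_ID IS NULL"),
    ("no future-dated telemetry",
     "SELECT COUNT(*) FROM TELEMETRY.RAW_READINGS WHERE READING_TS > CURRENT_TIMESTAMP()"),
]

def main() -> int:
    # Connection details come from CI/CD environment variables (assumed names).
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        database=os.environ["SNOWFLAKE_DATABASE"],
    )
    failed = False
    try:
        cur = conn.cursor()
        for name, sql in CHECKS:
            bad_rows = cur.execute(sql).fetchone()[0]
            print(f"{name}: {'PASS' if bad_rows == 0 else f'FAIL ({bad_rows} rows)'}")
            failed = failed or bad_rows > 0
    finally:
        conn.close()
    return 1 if failed else 0  # non-zero exit makes the pipeline job fail

if __name__ == "__main__":
    sys.exit(main())

The design point is simply that a failing check returns a non-zero exit code, which is what lets a CI/CD pipeline block a bad release before it reaches production.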
It just really enables collaboration, and it's really easy to integrate with our current deployment and make changes if we need to, or even just introduce new products. It's really simple to do.

[00:10:40] Speaker A: Yeah. Now, in DataOps practices there are many aspects; we call them the seven pillars of DataOps. I'd be curious, what do you think has been the most valuable aspect of the tool set that you guys have been able to leverage?

[00:10:55] Speaker B: Well, definitely CI/CD, that's for sure the easiest. But I think the observability is really great as well, because we can easily monitor our pipelines, we have triggers for when they fail, and everyone can see everything. And also just the governance around it. I'm a big fan of the Snowflake Object Lifecycle Engine (SOLE); that's been a real help.

[00:11:20] Speaker A: Yeah, I know. When I had done Snowflake implementations, at one point I had close to 100 data engineers, which was probably the largest I'd had. And I'd be kind of curious: data engineers tend to feel like they want a bit of freedom to do what they want to do and need to do. Do your teams feel like they've been handcuffed, or have they been really embracing the capabilities that are provided to them?

[00:11:49] Speaker B: Yeah, that's a good one. I was going to say, you mentioning teams of 100 data engineers: our team is quite small. There are about eight to ten of us doing development. So I think everyone completely has the freedom to do what they want. One of the things we focused on over the last year is really our cost optimization, which was something we didn't necessarily focus on before. There's been a lot of work to do, and the team has really been picking up on things they can utilize around DataOps and doing different things, like different merge strategies and whatnot. So they're really having fun with that, playing and testing it all out.

[00:12:29] Speaker A: Yeah, that's great. One thing, and I think we were talking about this prior to today's conversation, is your ability to create certain patterns and best practices and have them adopted across the board, without having to worry about people doing things differently. I'd be curious, talk a little bit more about how that has benefited you guys.

[00:12:53] Speaker B: Yeah. So I guess I can touch on our architecture there. We took the approach, in building the data platform, that we would divide our Snowflake accounts... well, not Snowflake accounts, but tenants, by our governance domain. We used to have something ridiculous like 33 Snowflake tenants, but for each of the Snowflake tenants we have a matching DataOps project that manages everything for it. With DataOps, what we did was create a template project that gives a standard template for all the orchestration from DataOps to Snowflake and all the basic macros that we need in the setup. And if we wanted to create even more Snowflake tenants, which I personally don't want to do, we can easily clone that, so we have a template to start with. And we also have standard warehouses and roles across all our projects, so we can easily manage those in what we have as a reference project.
So we don't have to go into each and every single project and change the warehouse size or change the grants to a role. We can do that by calling this reference project and doing it through that. I think that's also why it was really easy for us to onboard people onto our team and be like, hey, this is the way we work, we can teach you this. You're also free to go off and design your own ways of working as well if you want to.

[00:14:18] Speaker A: Right. Now, as people are developing things and releasing them into production, do you have certain standard tests that you put things through? As the lead data engineer, you want to make sure there's a level of consistency, but you can't review every piece of code manually, right?

[00:14:42] Speaker B: Yeah, and I think that's something I really learned becoming a lead data engineer as well. The person who was lead data engineer before me paved the way for me, and he was really great. He also taught me the small things, like consistency across code: your variable names, the way you do things. And it's also looking for things like, is this performing optimally? Is this thing actually doing what you want it to do? So it's looking for those tests. And the way of working we've adopted is that I'll do the final review before something goes into production, but in the lower environments the rest of the team reviews each other's code, so they get that experience of reviewing code, and also of seeing how other people work. It really gives them a chance to fight their corner and be like, this is why I've done this this way, this is why I've done that that way.

[00:15:35] Speaker A: Yeah, that's great. Now my brain just went into a different area. One of the things I like, and I'd be curious to get your take and your team's take on this, is that I've implemented Snowflake without having a DataOps and really wasn't able to take advantage of zero-copy cloning. One thing I love about DataOps is its ability to leverage zero-copy cloning so that you can truly use production-volume, production data when you're doing your feature branching and even your environment management. I'd be curious how you and your team have leveraged that capability and what benefit it's bringing.

[00:16:18] Speaker B: We do it all the time. The number of feature branches, the number of zero-copy databases. We love them. There are so many, we forget to delete them after we've used them. No, but yeah, that really helps our development. The way we work, not all of our data sources have test data, so when we're developing we are just using that production data, which is great. And when we're doing that, we also get a feel for the data quality: if there's anything missing, if there's any more governance we need around things. So yeah, we love the zero-copy clone.

[00:16:55] Speaker A: I think it's one of the most under-leveraged, and probably one of my favorite, capabilities that Snowflake has had from day one. And I've worked with a lot of organizations, and the number of organizations that don't leverage it...
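[Editor's note: a minimal sketch of the zero-copy-cloning pattern discussed here, assuming the snowflake-connector-python library; the database names, branch naming convention, and environment variables are hypothetical. DataOps.live automates this per feature branch; the sketch only shows the underlying Snowflake mechanics.]

import os

import snowflake.connector  # pip install snowflake-connector-python

def get_connection():
    # Assumption: credentials supplied via environment variables.
    return snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
    )

def clone_for_branch(prod_db: str, branch: str) -> str:
    """Create a zero-copy clone of production for a feature branch."""
    # Sanitize the branch name into a valid Snowflake identifier.
    suffix = branch.upper().replace("-", "_")
    clone_name = f"{prod_db}_FB_{suffix}"
    conn = get_connection()
    try:
        # CREATE DATABASE ... CLONE is standard Snowflake SQL: the clone is
        # metadata-only, so full production volume is available instantly
        # and consumes no extra storage until the copies diverge.
        conn.cursor().execute(
            f"CREATE DATABASE IF NOT EXISTS {clone_name} CLONE {prod_db}"
        )
    finally:
        conn.close()
    return clone_name

def drop_clone(clone_name: str) -> None:
    """Tear down a branch clone: the cleanup step that is easy to forget."""
    conn = get_connection()
    try:
        conn.cursor().execute(f"DROP DATABASE IF EXISTS {clone_name}")
    finally:
        conn.close()

# Example: clone_for_branch("ANALYTICS_PROD", "feature-cost-model") creates
# ANALYTICS_PROD_FB_FEATURE_COST_MODEL without copying any data.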
You know, it's great. One thing also is, you know, Snowflake is more than just tables and views.

[00:17:18] Speaker B: Yeah, exactly. Yeah.

[00:17:20] Speaker A: And are you leveraging DataOps for the other things, like the RBAC, or anything else within Snowflake itself as well?

[00:17:28] Speaker B: Yeah, absolutely. That's what I said earlier about SOLE: it's been a game changer for us. All of our account-level objects, our warehouses, our users, our roles, that's all managed through DataOps. All our permissions are managed through DataOps. All of our databases and database objects are managed in DataOps. And I think it's really helped to put it all in one place.

[00:17:51] Speaker A: Right.

[00:17:52] Speaker B: So we completely leverage DataOps for that. I can specifically remember, before the Snowflake Object Lifecycle Engine was a thing, going into each Snowflake tenant and changing the warehouse size.

[00:18:06] Speaker A: Yeah. Now, are you guys multi-account with Snowflake, or do you just have a single account?

[00:18:13] Speaker B: So we have one orchestration account and then tenants within that. So when we're data sharing, it's kind of viewed as an internal data share, but we're controlling the data sharing between each tenant ourselves.

[00:18:26] Speaker A: Right. And you're doing all of that inside DataOps as well?

[00:18:30] Speaker B: Yes.

[00:18:30] Speaker A: Yeah. To me, trying to put a mental picture together of all of this cross-development and management, you can see the nightmare it could be if you didn't have a central place to manage all of that.

[00:18:47] Speaker B: I remember, probably a couple of years ago, we had a solution architect join our team, and he was trying to draw the diagram, the cobweb diagram of everything going together, and he was just like, I can't.

[00:18:59] Speaker A: Yeah, I've done it personally the hard way, trying to manage even the RBAC at the nested levels and trying to manage that code and deploy it. I want to change the pace a little bit. I consider we're now going into the era of AI, and it's one of the big things at many organizations. I'd be curious: is AI on the horizon for Eutelsat, and if so, what does that future look like for you guys?

[00:19:30] Speaker B: Yeah, for sure. So I think last month, actually, the team had a really successful AI day where they got people from all of the business together. They got a few vendors, and they set the data and AI strategy for the future. And I think what was really good was they brought in the reality of AI. Obviously it's the big trend at the moment, but it's about actually having a problem and AI being a solution for that problem. They really touched on that, saying, you know, AI is not everything, but yes, we're happy to do AI if you have the use case for it, and all the principles around that. So one of the main things around that was data governance and governing AI, and that's another really big initiative for us.
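[Editor's note: picking up the reference-project and RBAC threads from just above, this sketch illustrates the "declare once, apply everywhere" idea behind managing warehouses, roles, and grants centrally. In DataOps.live this would be declared in SOLE configuration rather than code; the Python below only mirrors the concept, with hypothetical warehouse, role, and database names.]

# One declarative definition, edited in one place, applied to every tenant.
WAREHOUSES = {
    "TRANSFORM_WH": {"size": "MEDIUM", "auto_suspend": 60},
    "REPORTING_WH": {"size": "SMALL", "auto_suspend": 120},
}

ROLE_GRANTS = {
    "ANALYST": [
        "USAGE ON WAREHOUSE REPORTING_WH",
        "USAGE ON DATABASE TELEMETRY",
    ],
}

def render_statements() -> list[str]:
    """Turn the declarations into idempotent SQL to run against each tenant."""
    stmts = []
    for name, cfg in WAREHOUSES.items():
        stmts.append(
            f"CREATE WAREHOUSE IF NOT EXISTS {name} "
            f"WAREHOUSE_SIZE = '{cfg['size']}' AUTO_SUSPEND = {cfg['auto_suspend']}"
        )
        # ALTER keeps pre-existing warehouses converged on the declared size,
        # so resizing means editing one dict, not visiting 33 tenants.
        stmts.append(f"ALTER WAREHOUSE {name} SET WAREHOUSE_SIZE = '{cfg['size']}'")
    for role, grants in ROLE_GRANTS.items():
        stmts.append(f"CREATE ROLE IF NOT EXISTS {role}")
        stmts.extend(f"GRANT {grant} TO ROLE {role}" for grant in grants)
    return stmts

if __name__ == "__main__":
    # In practice these would be executed against each tenant's connection;
    # here we just print them.
    for stmt in render_statements():
        print(stmt)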
So they had this day where they got everyone from the business together, defined potential use cases, prioritized those use cases and looked at what they could be, and then it's like, okay, what are the next things around that? Data quality, data governance, and all the things to ensure that we actually have good data to be AI-ready.

[00:20:36] Speaker A: Yeah. So that brings me to my next question. We hear a lot about AI-ready data; I know we've been saying it ourselves. To put you on the spot: if I were to ask you, what is your definition of AI-ready data? What does it mean to Eutelsat?

[00:20:54] Speaker B: Yeah, I guess from my perspective it's the old saying of rubbish in, rubbish out, right? Something we've really been focusing on for the past year is data quality and data governance. I've had the pain of doing lineage and all this analysis of who has access to what, and really focusing on our data governance has really helped. And I think when it comes to AI, AI is such a big thing, and you do have to be careful with the security and governance around it and make sure that it's doing what you intend it to do. So for me, being AI-ready is having the right data governance and ensuring that your data quality is the best it can be. Because data is never really going to be perfect; you are going to have some anomalies here and there, you are going to have some issues. But making sure that you're really focusing on data quality, fixing it at source or fixing it where you can along the pipeline, I think that's really important.

[00:21:48] Speaker A: How do you see the role DataOps plays in all of this? You talked about quality and governance and all that other stuff. What role is DataOps playing in all of that?

[00:22:01] Speaker B: Yeah, so I think one of the big ones is that we do all of our validation testing with DataOps. So that's really important; we have all of our tests defined in DataOps. And another one is that we have all of our metadata stored in DataOps as well. So all the YAML files have descriptions of what the data is, and all our permissions and governance are in DataOps as well. So I think it does actually really start there.

[00:22:30] Speaker A: So, you know, Snowflake has a lot of capabilities, right? Cortex. Are you guys already starting to play around with Cortex?

[00:22:41] Speaker B: Yeah, not me personally, but there are definitely team members testing out the functionality. I know they're having some fun, but I also know that the issues they're facing are around data quality and data governance.

[00:22:53] Speaker A: Yeah. You brought up earlier that you don't have a "hey, we deploy once a week" or whatever. My take on things is that part of being AI-ready is also being able to pivot when you do find something is bad. Being able to say, oh, that deploy didn't go well; we can't let that model go astray. I think that is an important part as well.

[00:23:24] Speaker B: Yeah. We honestly had the biggest initiative to fix data quality over the last couple of months, and that's becoming a really big priority in the IT office.
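[Editor's note: a sketch of the metadata-as-YAML idea Nadia describes above, assuming dbt-style schema files (the format DataOps.live's modelling layer builds on) and a hypothetical directory layout. It fails a pipeline run when any column is missing a description, turning documentation into an enforceable governance check rather than a convention.]

import sys
from pathlib import Path

import yaml  # pip install pyyaml

def undocumented_columns(schema_file: Path) -> list[str]:
    """Return 'model.column' names whose description is missing or blank."""
    doc = yaml.safe_load(schema_file.read_text()) or {}
    missing = []
    for model in doc.get("models", []):
        for column in model.get("columns", []):
            if not (column.get("description") or "").strip():
                missing.append(f"{model['name']}.{column['name']}")
    return missing

if __name__ == "__main__":
    problems = [
        item
        for path in Path("dataops/modelling").rglob("*.yml")  # assumed layout
        for item in undocumented_columns(path)
    ]
    for item in problems:
        print(f"missing description: {item}")
    sys.exit(1 if problems else 0)  # fail the CI job if metadata is incomplete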
So the focus is going towards making sure that our data quality is great, and that's not just from us in Snowflake, but really at source, where the data is coming from.

[00:23:47] Speaker A: Yeah, that's great. Let's take a little reflection and look at your future outlook. What would you say is the biggest lesson learned on your DataOps journey so far? Because you were there before Snowflake and DataOps, and now you've gone through all this, what would you say is your biggest lesson from that journey?

[00:24:09] Speaker B: I think it's not rushing, I guess. I know I said we churn things out a lot, but it's also not rushing that process, and making sure that what you're doing does what it's intended for and is actually working the way you want it to. Rather than doing the quick wins, the quick fixes or the quick governance, it's making sure that you have everything right. And it doesn't have to be right the first time; you can obviously do versions of something. But it's ensuring you have that mentality that you're not just going to get something out as a quick fix.

[00:24:37] Speaker A: Yeah, I used to call it fail fast, right? You don't need perfection. But what is really interesting, bringing that up: are we contradicting ourselves when we say, hey, for AI you need good quality data? To me it's going to be interesting, that balance between how you iterate, but at the same time, how you make sure you're not poisoning your AI.

[00:25:05] Speaker B: It's the mentality, right? It's not going in thinking we can do these quick wins here and there; if your intention is, okay, let's do the best we can to fix something, then yes, issues may crop up, and then you identify those issues and you go fix them. It's all a process.

[00:25:21] Speaker A: Yeah. You know, I think part of it is also the culture, where you guys have kind of been that way from day one. I've worked with organizations where you had to do a culture shift; you had to get people to embrace all being on the same page. Do you think you have that level of culture at your organization?

[00:25:43] Speaker B: I was going to say, I don't think we had that mentality from the start; that's one of the lessons I've learned. But I think we're definitely getting there. On the OneWeb side, I think we're still kind of in that startup phase where we're standardizing everything, making sure everything has processes and documentation, whereas Eutelsat has been doing that for years and years and years. And one of the things about the merger is we're also combining our ways of working, and we're learning from each other.

[00:26:08] Speaker A: Right, that's true. Because now you had two completely different architectures, and that could make for a whole other conversation. I know at Big Data London I talked about that, and we won't get into it here, but yeah, that merger brings in a big challenge. So before we really close out, I'm going to ask one more. Like I said, I've worked with a lot of organizations that have Snowflake; I've done Snowflake implementations without DataOps as a practice, whereas you guys have been using DataOps.
What kind of advice or feedback would you give to an organization that doesn't have a DataOps practice, or isn't even using DataOps.live? What would be your advice to those folks?

[00:26:51] Speaker B: I think it would be: really invest in your data governance, and have a plan for how you're going to do your data. I guess we're talking in data products: what you want your data products to be, the controls you want to have around those data products, and also things like documentation. You know, document your metadata. We also use DataOps to push metadata to our data catalog in data.world, so users have a view where they can see, oh, here's everything, this is what I want to use. And yeah, I think investing in data governance is a big one. Investing time. Time, money maybe, but definitely time.

[00:27:33] Speaker A: It's interesting when you bring up data governance: I think a lot of people instantly talk and think about catalogs, but they really don't think about governance in terms of the data engineering aspect and the best practices there, for sure. So Nadia, what's next for the Eutelsat data team? What's going on? Where are you going to be evolving to? What's on the plate for you coming up?

[00:27:56] Speaker B: Yeah, so I think it is AI. We definitely want to start trying to utilize that at Eutelsat. But right now it's ensuring that we have our data governance and data quality standardized, and working towards fixing that. And then we can hopefully do some really good AI use cases.

[00:28:16] Speaker A: Great. Well, unfortunately we're coming up to the end of the half hour. It's always a pleasure talking to you, Nadia, and I enjoyed having you on. I want to thank everybody who is listening, and if you're just listening for the first time and you're not with us live, don't forget to like and subscribe to our YouTube channel so you can be up to date on any upcoming episodes. Nadia, do you have any final words, parting words for anybody?

[00:28:45] Speaker B: Yeah, I guess: don't rush AI. Invest in your data governance. Invest in your data quality.

[00:28:52] Speaker A: Great. Thanks again, everybody. Next time we'll continue our journey here on AI-ready data, featuring more practitioners and a lot of thought leaders in the industry. Until next time, this is Keith. Remember, good enough data is not AI-ready data. Thanks everybody.
