Evan Estola - On recommendation systems going bad, hiring ML engineers, giving constructive feedback, filter bubbles and much more - #9 | Transcript

Evan Estola

You can see the show notes for this episode here.

This transcript was generated by an automated transcription service and hasn’t been fully proofread by a human. So expect some inaccuracies in the text.

Ronak: Evan, super excited to talk to you today. Welcome to the show.

Evan: Thank you very much for having me.

Ronak: So you are a director of engineering at Flatiron today, and you have been working on machine learning systems for the majority of your engineering career. Can you tell us how you got started in machine learning?

Evan: Sure. So I think my path to machine learning has a lot of overlap with my path to becoming a software engineer at all. I was a math kid. I loved that side of school, and I was never really a programmer. I didn't get exposed to it super young or anything like that, but I did get exposed to open source and Linux type stuff, so I was used to tinkering around on my computer. I initially went to school for biomedical engineering, and when I was there I made some lifelong friends, people that I'm still good friends with to this day. They were computer science majors, and not only that, they were taking a data mining class, and they told me about data mining. I was like, that's the coolest thing I've ever heard of. Why wouldn't I do that? So basically I just switched over after that and have been pursuing it ever since. For listeners who are younger and have never heard this old-timey term "data mining"...

Ronak: Yeah, I think it'll be great to explain that.

Evan: I think data mining kind of involved machine learning techniques, but it was really about any sort of systematic approach to using data sets to help make business decisions or anything like that. So it was a very pragmatic angle into using data and into the machine learning world, and that's what sparked me from the very beginning: not just the tools and the algorithms and all that, but also the use of it, the business or whatever you want to apply it to. I spent a lot of time in undergrad working for an information retrieval lab. I think there are a couple of different ways that people get into the machine learning space, or a couple of types of labs that became machine learning labs; everybody's machine learning now. There's the AI side, which has all this history from the eighties or whatever and was not always using machine learning type approaches: fancy knowledge bases, or even the old-school chess algorithms, all built on knowledge bases and that sort of thing. Those groups got moved towards machine learning, or rather are definitely calling themselves machine learning. Well, I guess AI's cool again too, but whatever. I came from the information retrieval side. So I think how that changes the way I look at things is that it's always practical. It's always about impacting the business, or a customer, or a user, or whatever. Not to say that people who come in from the other side don't do that as well, but I think that's always been the lens I've looked at applying machine learning through.

Ronak: That's pretty cool. So you mentioned a little bit about the math background, and I've actually heard different takes from different people I've spoken with. Some will say that to do machine learning effectively, you need to understand math. Whereas a lot of folks these days have the title of machine learning engineer, and some of those will say, well, you need to know how to code, and having a CS background is, if not the same, maybe a little more important than understanding all the math behind it. I'm curious how much of machine learning these days is math versus CS, or does it depend? I just want to get your thoughts on this.

Evan: Yeah, that's a great question. So I loved math. I'm not sure I was good at it, or rather, I thought I was good at it until I got to college and met people who were actually good at math, people who can just think in math. I was like, okay, I guess I'm not a math person anymore, but I still liked math. An interesting thing is that you can never know all the math, right? When I was first starting machine learning stuff, the math we did was all linear algebra and discrete math. And now, with deep learning type stuff, it's all back to continuous calculus, differentiable functions and all this sort of thing. So you're never going to know all the math; you're never going to know the math that you need to know next. I would argue that you can never know too much statistics. Okay, maybe you can know too much statistics, but unless you're a statistics person, you could never know too much. Especially working in a business, you're often going to meet people with different backgrounds and need to explain things, and need to figure out how confident you are about something, so the stats is always useful. In terms of the engineering side, when I started out (and I could probably tell a bunch of stories about how old I am or whatever), you had to really know a lot of engineering to do any of this stuff. We were rolling our own Hadoop clusters, and the amount of engineering that it took to process large amounts of data was pretty intense. Nowadays the tools have gotten a lot better, so there's a lot more space for folks coming in on either the math side or the sort of product-plus-math side or whatever. So I think there are a lot of valid ways to get into this, and we kind of need everybody coming from all these different angles, and you're never going to know everything.

Ronak: Yep, that makes sense. So I imagine you spend a lot of your time hiring, being in the position that you are. What sort of skill sets do you look for when you're interviewing folks? Does it depend on the kind of role or the team you're hiring for, leaning to one side or the other, or do you look at the candidate holistically?

Evan: I think it depends; it's a bit of both. Sometimes we find people that are just great, and they seem like they're excited to work on hard problems. One great thing at Flatiron is we are always looking for good communicators. We work in such a cross-functional space that communication, being able to explain things, being able to explain hard concepts, is always super useful. Even back in my Meetup days we had a pretty small team, and some of the things I always looked for in engineers, especially on the ML side, were people who can approach a problem from a problem-solving angle and not an "I want to use this cool algorithm" angle. We've kind of set up our interview questions to get people to give us the naive approach first: just use search, or just rank by most popular. Don't go right to the craziest algorithm, because that first thing might work pretty well. So it was always about the problem first. And, especially at a small company like that, being able to communicate with product teams, being able to communicate with the CEO. I always joke (maybe we'll talk a bit more about interpretability later) about the CEO problem: if your CEO gets a bad result from one of your algorithms, you had better be able to explain it, because he might come right over to your desk, especially at a small company. But they're also kind of the worst case, because the CEO has the weirdest data; they use the product for the weirdest things. They're not a normal user.

Guang: This conversation is getting a little bit too real for me, but let's continue.

Ronak: So it’s it’s I understand that thinking of machine learning as a means to an end, instead of just thinking of like, Hey, I want to use this cool algorithm in terms of just working with the product itself. And like I said, communicating with people How do you think about learnability or teachability versus someone who has tons of experience coming into the job? Like how do you balance that?

Evan: Yeah, that’s a, that’s a great question. I think in our industry, anybody that has a lot of experience has learned a lot of stuff, be the, be the computer scientist as kind of being a permanent learner. You know, like I said, you never know all the math, you’re never going to know all the software packages. You know, when I talk to, when I talked to people that are trying to get into the industry as a whole, that’s the main thing I tell them. It’s just learn how to learn things, because you can check every box on a, on a resume or on an application in terms of the tools that you think that you need to know. You’re never going to know, even once you get there, even if you know all the packages they use, you’re still not going to know that code base everywhere you go, you have to learn. And so I definitely think that that’s a key aspect, but that’s not to say that people who have a lot of experience. They probably been through a lot of those before, so, yeah.

Ronak: And you also mentioned that from a machine learning perspective, it's a lot more about practice and how it can help the business. Being a director of engineering, you are very close to the management and the business, closer than an IC, I would imagine. How does that help you see the business aspect more closely, and how does it help you influence some of the decisions you make in terms of the direction the team is taking?

Evan: Oh yeah, great question. I love working with the business. I love working with my product partners and with all the different cross-functional folks. Now that I'm in health tech, we have doctors and other clinicians, and it's really a whole range of people who are influencing things. I love working on problems that are complicated enough that knowing the technical solution, or knowing a different technical approach, changes the way you approach the problem. Being able to get engineers who want to understand the problems we're working on, who can work cross-functionally and come into something and say, hey, actually, if we look at it through this different lens, we can solve this problem in a way that a non-technical person never would have come up with. I love bridging that gap, and that's a lot of what I do now in my role: helping to frame the business problem, helping to communicate technical things to non-technical people, and really connecting smart people to hard problems. That's my favorite thing to do.

Guang: That’s really cool. And how has that changed from your last job working at meetup? Because to me, while the benefits of working in like a deeply technical fuel, like ML or engineering, is that it is somewhat independent of the domain itself. And obviously to Excel, edit, you need to have a lot of domain expertise, but it’s usually not a prerequisite to get the job in the first place. Was there a steep learning curve that you felt like you had to kind of go through when you first joined flat iron or what

Evan: Yeah, that's a great question. This is a question that's near and dear to my heart, because obviously a huge part of what I do is onboarding people into Flatiron, and absolutely, it's a steep learning curve. If we only looked for machine learning experts who also have NLP experience and also have knowledge of cancer genomics, we probably wouldn't find many people. Luckily there are a lot of people who have an interest in it. I like to joke that Flatiron probably has the highest percentage of people who thought they were going to be doctors and ended up as computer scientists or statisticians or whatever; I think we have the highest percentage of those people in the world. Not that everyone at Flatiron comes from that background, but having some sort of interest in the biology side and the medical side helps a lot, because there is a lot of terminology to learn. The first few months onboarding at Flatiron are not only about learning the techniques we were talking about and the tools we use, but also just learning all of these medical concepts. It's a challenge, but it's also a ton of fun. I think a lot of people in our world are just curious and want to know how things work, so there's a lot of opportunity to do that.

Guang: Going from not becoming a doctor to something else, you know, having Chinese parents, I can definitely relate to how that feels.

Ronak: I was actually going to ask you, and you already shared some aspects of this, but what does a typical day look like for you as a director of engineering at Flatiron?

Evan: Yeah, I’d say my, my job varies quite a bit. I get most of my work done through other people or working with other people. So naturally I’m spending a lot of time in meetings. But I, you know, I’m generally just trying to find the right people, put them together, help them understand what they should be working on. I I’ve I’ve I really love the, the management side of. Of my job. I love giving feedback. I love giving positive feedback. I love giving constructive feedback and, and I generally just like helping people see what they can do and, and to help them get there and then to help impact the impact the business. And I’ve been lucky enough to work on businesses that I believe that what we’re doing is also a good thing to do so impact the business impact the world. It works out.

Guang: So one of my personal goals is to get better at giving feedback. I find it pretty difficult, especially constructive feedback, right? Because usually you need to have social capital, you need to build a lot of trust first, and you also want to be very precise about the feedback you give, so that it doesn't feel like it's just a feeling but is based on evidence. Is that something that came naturally to you, or is it a skill set you developed over time? Do you have any tips or advice for me?

Ronak: Actually, I would say plus one to that question.

Evan: I think you've nailed a lot of it. Good feedback is specific and actionable and comes from a place of care. I think you're right that you can't just jump in and start giving people critical feedback when they don't know anything about you and they don't trust you. You have to get to a place of trust with people, where they trust that you have their interests in mind when you're giving that feedback. So I wouldn't say it comes naturally. I sometimes joke that my dad's an American football coach, that's his profession, so I certainly grew up in a household with probably more constructive feedback than most. My dad has no problem yelling at people. Obviously I frame things very differently than that, but I guess where it really connects is that I remember, when I was a kid, a coach kind of getting down on me for something I did on the football field, and my dad said, hey, if the coach is getting on you, that means he thinks you're good. It means he cares, because he wants you to be good for the team. If he was going to bench you, he wouldn't waste his time on you. So that's where I learned to love and seek critical feedback myself, and I think that's helped me learn to see that in others and see how critical feedback is good for people. When you really know you've got it is when you've given somebody critical feedback and it increases your relationship instead of costing you. Obviously you have to have a relationship first; you can't do it from nothing. But once you've gone far enough, you can actually develop more trust with somebody by giving them critical feedback, because they know that you care, and they know you're willing to have an awkward conversation to help them out. So it has to come from a place of caring.

Guang: I wish I had that wisdom when my mom was yelling at me while I was growing up. Probably would have turned out better.

Ronak: I know we're digressing, but I have a follow-up on that. One aspect of a feedback conversation is giving constructive feedback and doing it the right way. The other aspect is the other side: the person who is hearing the feedback and has to understand that it's coming from a place of care and that you want them to improve. I'm assuming there might be situations where the acceptance of feedback is not immediate, or at least there's more of a "hey, why do you think so?" How do you handle those conversations in that case?

Evan: Yeah, that’s

Ronak: a great question. I mean, that’s the first concern I have if I’m thinking about constructive feedback is like, what if this person doesn’t believe what I say? How do you get over that?

Evan: Oh yeah, great point. I certainly wouldn't say any of this came naturally when I was starting off as a manager, and I've screwed this up many times; probably some former report of mine could listen to this and go, oh, that guy wasn't so sure about it himself. So it's something I've definitely had to develop over time. I think there are a couple of things that make it easier. One thing that's been really great is that Flatiron has a really well-developed career ladder; I think a lot of companies are getting better at this. I know it's really sounding like a director of engineering to say this, but a good career ladder is a beautiful thing, because you can point to things in that document, and you can help people understand it over time. That really helps to contextualize things and make them more clear, and it helps a lot with the receiver of the feedback knowing why you're giving them this feedback and how it fits into the bigger picture, that sort of thing.

Guang: Got it, really cool. So, changing gears a little bit: here on the podcast we love to hear stories, and we're very excited to hear some of your stories today about recommendation systems going bad. Before we start, can you give us a TL;DR on what a recommender system is and how it works?

Evan: Yeah, I love that question, because I think it's not as obvious as it seems. I imagine some people are like, hey, I know what a recommender system is: Amazon, "people who liked this also bought this," right? But I think there's actually a lot more to it than meets the eye. In general, a recommender system is any time you have more items that a user could potentially engage with than you have attention for that user to spend finding something to engage with. The classic example is the little three boxes at the bottom of a product page that say, hey, if you liked this, you might also like these things. But it goes all the way up to Netflix, which has a recommender system of recommender systems. They build that full page, the list of lists, and the whole thing is optimized together, from what I understand. So there's a lot that goes into it. In terms of how they work, they can be anything from a simple graph walk, like I said: "people who like this also like this." Even saying the words "graph walk" makes it sound more complicated than it is; if you have that data store, you can literally just look up what else people who liked this thing also liked. A lot of times you can model a recommendation algorithm as even just a classification problem: if I put this in front of somebody, will they click it? The next level up is to look at it as a ranking problem: okay, we have N impressions we can give, so how do we put the best things this user might engage with into those spots? Do we just put the things we think they're most likely to engage with? Do we use that space to give them new things, because we want to know whether they'll like them or not? Do we use that space to make sure they have a variety of options, because maybe they're in different moods when they come to the product? So there are a lot of different ways it comes together. My first ever job building recommender systems was at Orbitz, where I built a little hotel recommendations module: if you were looking at a hotel, we showed three other hotels you might like. One of the most successful algorithms we ever deployed for that went like this: we found a hotel that was similar to the one you were looking at, we found a hotel that was at least $20 cheaper than the one you were looking at, and we found a hotel that was at least a star rating up from the one you were looking at. It was this sort of Goldilocks scenario from whatever buying-psychology land, and it was one of the most successful algorithms we ever did. Recommender systems nowadays, I'm sure, do all sorts of personalization and machine learning for that, but even that sort of handcrafted thing really played a role in making it successful.
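To make the two ideas above concrete, here is a minimal Python sketch of a co-occurrence "graph walk" lookup plus the Goldilocks-style reranking for hotels. This is illustrative only: the data structures, field names, and the $20 and one-star thresholds come from the anecdote, not from Orbitz's actual code.

```python
# Hedged sketch: "people who looked at this also booked that" plus a
# Goldilocks rerank (one similar, one cheaper, one nicer hotel).
from collections import Counter

def cooccurrence_recs(viewed_hotel_id, view_to_bookings, k=3):
    """view_to_bookings: dict mapping a viewed hotel id to a list of hotel ids
    that sessions viewing it eventually booked (assumed precomputed)."""
    counts = Counter(view_to_bookings.get(viewed_hotel_id, []))
    return [hotel_id for hotel_id, _ in counts.most_common(k)]

def goldilocks_recs(anchor, candidates):
    """anchor/candidates: dicts with 'price' and 'stars' keys (illustrative)."""
    similar = next((h for h in candidates
                    if abs(h["price"] - anchor["price"]) < 20
                    and h["stars"] == anchor["stars"]), None)
    cheaper = next((h for h in candidates
                    if h["price"] <= anchor["price"] - 20), None)
    nicer = next((h for h in candidates
                  if h["stars"] >= anchor["stars"] + 1), None)
    return [h for h in (similar, cheaper, nicer) if h is not None]
```

The point of the sketch is how little machinery the baseline needs: a precomputed co-occurrence table and a couple of handcrafted rules can already make a competitive recommendations module.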

Guang: So, speaking of Orbitz, our first story starts with the Wall Street Journal article from a while back called "Orbitz Steers Mac Users to Pricier Hotels." So what happened?

Evan: So I CA I can’t remember the exact details, but I think the first version of that headline was even worse. I think the first words that had been really made it seem like Orbitz was actually charging Mac users more. So my story for this is, is strange because I, I was actually not at orbits when that article came out, I was in my first week at meetup. So I had just left orbits and I’m not there. And so I can’t speak to the response or what happened inside or anything like that. But I just remember, like, I just moved to New York city, I’m in my new office and now there’s this national news story, this like wall street journal published it the next day. It’s on good morning, America. This was honestly looking back on it. This was one of the first big, like, is bad news. Like it’s crazy and need to think about how, like, you know, that’s a whole style of journalism, not, but like, this was one of the first kind of big ones I think. The story has, I know it is that we had a group, the team that I was working on this, I was a junior engineer pretty much, a couple of years out of school, freshman school, the team I was working with was exploring different data points that we had available to us. A lot of people browsing orbits are not logged in. So we can’t necessarily tie you to your, your user history and all that. It was hard to tie people to use our history. Anyways, like I said, we were rolling our own Hadoop clusters. It was hard, hard. But the the, one of the things that we had available to us was the user agent. So we had the, you know, we knew your operating system, your browser and that sort of thing. So, like I said, I was working on that hotel recommendations module, basically three, three boxes at the bottom of your hotel screen. And we did a AB test of basically doing that, that kind of very simplistic graph lock algorithm, where we took the hotel you were looking at, and we’ve looked at people who looked at that hotel, what hotels did they end up booking? And we showed you the hotels that people were most likely to book after looking at the hotel you’re looking at. And then we thought, Oh, well we know this user agent. Maybe we can segregate the data that we use. So for Mac users, we’ll only use Mac data. And for PC users were only used PC deck. And we had some reason to think that might work because we had done some data analysis that had shown that Mac users tended to spend like on average 20 or something. Maybe it wasn’t, maybe it was a lot. It might’ve been like a hundred dollars more per hotel on average hotel night on average. So we knew that Mac is, and this makes sense, you know, Mac was two to three, four times as much as a PC. So that kind of made sense. So with our intuition, we deployed the algorithm and it failed the AB test. So I turned it off. In hindsight, I think the fact that the user had already clicked on a hotel, they pretty much already given us their price point when they clicked the first hotel. So we they’d already given us more information than their browser was going to tell us about how much money they were looking to spend on this room. So the flip side then is I think, I think, yeah, I don’t know who it was or whatever, but my, my, the sense I get was that wall street journal came in and was like, Hey, we’d love to do a story on you. And all this was like, Oh cool. We have this, we have this team doing all this really cool data stuff and doing all this really smart stuff with data. 
And, you know, here’s one of the cool things we found is that, you know, people on max spend more on hotels than PC users do. So we’re using that to like, you know, to influence our algorithms or whatever. I think the article implies that it was used in search. As far as I know, it was never used in search. As far as I know, the only time we ever used it was in that recommendation engine AB test that lost. And I turned off and it was, it was national news story. Like I said, it wasn’t like good morning America. That’s a, that’s a crossover. You know, that’s a, that’s a, that’s a cultural touchpoint. That’s not just the tech or business world.

Guang: I feel like one of the takeaways for me ties back to what you were saying before about how you can't know enough statistics. A lot of the concepts in stats aren't super intuitive, so it takes time to become comfortable with them. Here, I don't know, maybe causal inference explains the gap between how you were actually using the data and how journalists perceived it. But I definitely see those kinds of discrepancies pop up, and they cause mass confusion in terms of, okay, what's actually happening here. So that's the takeaway I've always had from it.

Evan: the takeaway I’ve always had from it. I think that’s totally valid. The other takeaway that I’ve sort of reflected on over the years was that the problem was, I suspect whoever shared this information with the wall street journal they did so willingly, by the way, they thought it was going to be an article about how smart and cool the team’s work was. And I think the problem was it was about how smart and cool the team’s work was and not about what value it was providing to the customer. And so if you frame something as the value that it’s providing for someone, it’s a lot, it’s a lot harder to be accidentally taken as, Oh, we’re tricking people or we’re doing this, we’re doing this thing to, to scam people. So I I’ve always used that as a motivation to like continue to keeping the customer

Guang: in mind. That’s that’s really well said. Cause then you’re not just distracted by sort of this shiny yeah, I’ll read them without looking at what actually comes out. Any, any other stories a lot of these days.

Evan: So at Meetup we had a couple that I enjoy. One of my favorites regards Schenectady, New York. Have either of you ever heard of Schenectady, New York? Nope? So Schenectady, New York is a small town essentially right outside of Albany, the capital of New York. And now that you've heard of it, I bet you'll see it somewhere someday, because it frequently pops up any time you're looking at cities and collecting your geography data in a certain way. I'm not giving it away just yet, in case you or the listeners want to try and figure it out. I'll give you an example. The first time I ran into Schenectady, we were just trying to figure out the biggest cities for using Meetup in the country. So we pulled some geography data, pulled census data, took our data and divided, and Schenectady popped to the top: a higher percentage of the population of Schenectady was using Meetup than anywhere else in the world. I've seen this elsewhere as well. You remember Ashley Madison? It's like a dating website that had some sort of...

Guang: Oh yeah, the adultery thingy.

Evan: I think that was the gist. I'm not deeply familiar with it or anything.

Guang: That was a trick question there, but okay.

Evan: I saw a news story once that listed the top cities for Ashley Madison users, and it was like LA, New York, Schenectady. So this pops up. Any guesses? Have you figured out why this happens yet?

Guang: Are a lot of things being routed through it, so the data's collected there even though the users aren't actually there? Something like that?

Evan: that’s a good guest. That’s a good guest. So the, the, the trick is it’s actually user generated data and it’s user generated. If you ask the user for their zip code, because a percentage of people, when you ask them for their zip code are going to type in one, two, three, four, five.

Ronak: Is that the actual zip code for the city?

Evan: So it’s not even the zip code for the main city. It’s actually a zip code for a GE plant that sits in the city and GE got this. It was like an honorary zip code, the holes, the background, the story is zip codes is fascinating, but they didn’t used to exist. And now they’re like the main way a letter gets to where it’s going. Like, you can write anything you want as the city, but if you put a zip code, that’s the post office, it goes to. So GE was given this honorary zip code when they were first given out zip codes, I guess. And now they have to staff this huge mailroom because anybody that puts one, two, three, four, five on a letter letters to Santa, like. They all end up at this mail room at GE. I hear a friend of a friend of Bibles. It was actually a journey journalists did already at what point did a whole story on this and like interviewed people at the newsroom and stuff. But yeah. So keep an eye out for us connect to the it’s. It’s really just a parable of make sure your data is clean and don’t trust user generated data. But yeah, there’s a lot of those.

Guang: No, that’s actually kind of cool because part of it is sort of, you know, we discovered it is sanity check, right? It’s like you’ve done the analysis and then it doesn’t. And I think, I mean, that’s kind of a commentary to a lot of, you know, machine learning products or problems you want to solve. Right. Cause I can imagine, you know, me working on this 2:00 AM, you know, filling a ticket and then he’s like, Oh yeah, let me just pull up the top 10 and then all right. Looks good to me. Do, do, do you like have sort of a process in terms of like others then just, you know, cause I think you do have to care a lot about the problem and also having a good standard to your work. Right. In order to like examine these things this case you could be very obvious, but sometimes you said he’s more subtle. How do you, I’m curious, like how, how do you kind of go about getting people to do like a lot of sanity checks.

Evan: Yeah, good question. One of my former coworkers, Randy (I think he's @Randy_Au on Twitter) is a great follow. He's always described his job as either counting things or data cleaning, something like that, and one of his constant refrains is "know your data." If you're going to try to do something with data, you can't spend too much time getting to know that data. Often the best way to get to know data is just to run some top tens, check it out, check how long the tail goes, check the bottom of it, and just know what you're working with, building up that intuition. Like we said earlier, this is one of the big challenges at a place like Flatiron: it's very hard to get to know insanely complicated cancer data, but it's really the only path to building up that intuition. It becomes a sort of superpower to be able to say, here's what I'm expecting my model to see, because I know the underlying data and what it means at Flatiron.
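A quick profiling routine along the lines of "run some top tens, check the tail" might look like this. It's a generic sketch; the column names are placeholders.

```python
# Hedged sketch of a "know your data" pass: top values, tail values,
# and missing rate for a column before you model anything with it.
import pandas as pd

def quick_profile(df: pd.DataFrame, col: str, top_n: int = 10) -> None:
    counts = df[col].value_counts(dropna=False)
    print(f"--- {col}: top {top_n} values ---")
    print(counts.head(top_n))
    print(f"--- {col}: bottom of the tail ---")
    print(counts.tail(top_n))
    print(f"--- {col}: missing rate ---")
    print(round(df[col].isna().mean(), 4))
```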

Guang: You led the team at Meetup, and I imagine one of the core ML problems there is how to recommend the best meetups for users. I think you were telling us there were times this didn't exactly happen as intended. What happened?

Evan: Yeah. So one time my team put out an algorithm change. We had a pretty good workflow, so we could launch stuff pretty quickly, and we put out a change that I probably code-reviewed myself, and it looked good. And we ended up reversing our recommendations. The way we did recommendations at Meetup is we would select all of the meetup groups near you; it's geography-bound, so we'd find all the meetups near you, and our algorithms were fast enough and our model was simple enough that we could score every meetup in your area, sort them, and show you the top three or five or however many spots we had on that page. We could even reverse that and say, for a given group, who are the people most likely to join, and that's who we would email when a new group started. It's basically the same algorithm. But yeah, at one point we accidentally put out a change that reversed the order, so we were literally showing the mathematically worst meetups to all of our users. Most of it didn't reveal anything too deep about the psyche; it wasn't like we found who was the opposite of you. It was mostly just showing the most garbage things on the site, the most empty things, the most weird spammy things, that sort of thing. Which was good, actually; it made us feel really good about our recommendation system. We were like, hey, I think this thing's actually working. Looking at the bottom really tells you something.
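The bug described here is easy to picture as a sketch: score every nearby group, sort, take the top k. The model and feature details below are stand-ins, not Meetup's code; the point is how a single flag controls whether you serve the best or the worst items.

```python
# Hedged sketch: score-everything-nearby-and-sort, with the ordering that the
# bad release effectively flipped.
def recommend(user, nearby_groups, score_fn, k=3):
    scored = [(score_fn(user, group), group) for group in nearby_groups]
    # Correct behavior: highest-scoring groups first.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # The buggy release amounted to sorting ascending instead, i.e.
    #   scored.sort(key=lambda pair: pair[0])
    # which surfaces the mathematically worst groups.
    return [group for _, group in scored[:k]]
```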

Guang: Yeah. And how did you guys discover the problem?

Evan: I think we just discovered it by looking at the recommendations. I don't think we had time to notice it from the user engagement. We had a lot of graphs about how much people were engaging; all of our A/B testing stuff was tied into those systems, and we monitored them constantly. This was a relatively small team, and the tools that exist today didn't quite exist yet, so I spent a lot of my life in Graphite, just looking at graphs of how things were going. But I don't even think we noticed it in the monitoring, because somebody pretty immediately noticed and said, these look bad. We knew enough about the release that had happened to have a guess, like, ooh, it was probably that code, and then pretty quickly figured out, oh man, it's literally the minus sign on this.

Guang: Kind of extrapolating on that: for software, I feel it's more straightforward to monitor for when something breaks. Austin's not here today, otherwise he'd be rolling his eyes at me at that statement, having worked on monitoring before. But you get an error, right? At the end of the day, usually a compilation error or a runtime error or something else. For ML, I do think it's a lot harder, because everything can compile just fine, but the model can be spewing out total nonsense. There are easier cases: if you're trying to catch fraud and everything is being flagged as fraud, then maybe it doesn't even pass your CI process, and maybe you add some kind of rule that says, if this looks completely out of whack, we need to stop. But for some of the more subtle things, it becomes a lot more difficult. So yeah, I guess, your thoughts on debugging some of these problems and things like that?

Evan: Yeah. One thought that brings to mind relates to deploying machine learning models, using them in production, and A/B testing things in general. One of the things that I think almost everybody who deploys ML models in a consumer scenario, especially a recommender system or search or anything like that, eventually runs into (maybe not everybody, I'm sure there are problems where this doesn't apply) is a place where your offline model performance is not predictive of the online performance. There's a key reason why that happens, especially in the consumer scenario: users can't find a show they've never heard of on Spotify. I will never click on an artist I've never heard of and never seen. If it's never been recommended to me, if I haven't searched for them, if they're not related to another artist I already listen to, I'll never hear about them, so there's no way I would ever find them. That's inherently going to influence your model: your model just can't know how people feel about things you've never exposed them to. So at some level, any way you can get new information into a model helps. Taking the Spotify example, take an artist I do listen to and find the other artists similar to that artist; there's a reason these graph-walk things often become a component in recommender systems. You're trying to find new things. But if you're approaching it as a strictly classification problem, will this person like this artist or not, you can only learn from the artists I've listened to before. So your offline model might be great, and it's not necessarily going to work once you deploy it. I think that's how most people learn this: they get an offline model that just crushes it, like, oh, it's a 5% improvement over the previous model, and they deploy it and it's no better. And then the flip side: you realize, oh man, some of those models that weren't slam dunks offline might actually work if we deployed them, because they used some new data source, they did something interesting, they had some new, interesting ideas. So that's one of the huge things: you have to test everything, because offline performance does not predict online performance, especially in a scenario where there are more items than anyone could ever interact with. And I think you have to have monitoring around all those things; monitoring the click-through rate on your algorithms is super important. In the world I'm in now, we don't deploy algorithms that are used by consumers, but we do have data sources that are increasing all the time, so we are constantly retraining our algorithms against new data to make sure they're still hitting the performance characteristics we expect, and to make sure we're not introducing new bias into our algorithms or anything like that. There are a lot of different things, and it depends on the problem your machine learning algorithm is solving, but I definitely think continually monitoring performance is an important factor.
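No specific technique is named here, but one common way to act on "the model can't know about things it never shows" is to reserve an exploration slot or two in the ranked list, so new items earn impressions and future training data. This is a generic, epsilon-greedy-style sketch with all names illustrative, not a description of what Meetup or Spotify actually do.

```python
# Hedged sketch: mostly exploit the model's ranking, but hold back a slot
# for items the model hasn't favored yet.
import random

def rank_with_exploration(candidates, score_fn, k=5, explore_slots=1, seed=None):
    rng = random.Random(seed)
    ranked = sorted(candidates, key=score_fn, reverse=True)
    top = ranked[: k - explore_slots]
    pool = ranked[k - explore_slots:]
    explore = rng.sample(pool, min(explore_slots, len(pool))) if pool else []
    return top + explore
```

Whether a change like this actually helps is exactly the kind of thing that, per the discussion above, only an online test can tell you.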

Guang: That’s really interesting. So would you say that a lot of Oregon, I guess undervalue the the, the infra, like how important the Inpro piece? Because what we were saying right, is that things might not work in offline as well in, in online. So we have to run everything through you know, through prod, but then that means setting up the infra, such that maybe you can test out multiple channels and then, you know, do your things, but then you also need to, if you’re doing sort of a CI CD pipeline, you also need to have really good testing coverage and you know, all of these sorts of things. And then, like you said, also monitoring, but that’s also now trivial again, because you need to probably run through some samples and then maybe you have like a golden test set that you always run and then you had to look at distributions. So, and then that, that feels like a lot of that just, you know, it’s just info work, right. It’s not specific at all. And is that something that I guess, yeah, that’s, that’s been the

Evan: Absolutely. I know that your backgrounds are in data engineering, and I know a lot of other people who have been on the podcast come from the infrastructure side; you've talked with a couple of other people about chaos engineering and that sort of stuff. And yeah, I can't overstate how important the infrastructure side is, especially A/B testing. I'm sure there are a bunch of different ways to go about it; I've been out of this world for a couple of years, so I don't know if there's a product now that you should use for A/B testing and feature flags and whether all that stuff fits together, but there's really a lot of overlap between those things, so it's worth trying to approach them together. I hope everybody's not still rolling their own; I swear I probably wrote three or four A/B testing frameworks, and I hope I don't have to do another one. But yeah, it's super important. And like you said, it's such an intersection of the product side (building good monitoring means knowing what's important about the product), the infrastructure side, and the algorithm and data side. It all comes together.
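A large part of that plumbing is just deterministic variant assignment, so the same user always lands in the same arm without storing any state. A minimal sketch, not any particular product's implementation:

```python
# Hedged sketch: hash-based A/B bucketing, salted by experiment name so
# different experiments are assigned independently.
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# assign_variant("user-42", "hotel-recs-platform-segmentation")
# -> stable answer for this (hypothetical) user and experiment
```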

Ronak: So this is something that might be obvious for folks who work on machine learning, but I don't, so it's not obvious to me: can you share some examples of how you monitor the performance of a model that's in production? For software systems I can think of the obvious things, counters, gauges, latencies, but what are some of the non-obvious, or even the obvious, things in the ML world?

Evan: Yeah, I think most of the stuff is similar. You count the number of people who interact with a given module, and if usually 5% of people on a page interact with this module and all of a sudden that drops distinctly, you've probably got a problem there. One of the difficulties is that things are rarely going to go to zero, and sometimes it's a subtle effect. I saw a great talk from someone at Uber once, who was talking about all the things they do to predict traffic and all the monitoring they put in to look for spikes in traffic and that sort of thing, down to the level of, oh, a Rihanna concert just got out, so now there's a spike of traffic in this area. I thought that was really cool, and I never got around to building that good of a system, but that's what I'd love to have: basically, over time, what are the fluctuations in engagement with this module? Because it's going to vary by location and by all these features, and that's what makes it really hard to detect. You could tank your algorithm in Texas, but if that's 1% of your users, you might not notice the difference in your metrics. You might just think, oh, there aren't many great things going on this weekend, or maybe it's a holiday; it's very hard to tell the difference between a holiday and some other reason the engagement goes down. So unfortunately there's no easy playbook that I know of, like "here are the things you need to make sure you're monitoring," but there are definitely a lot of things you can take into account to get toward the ideal system.
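One way to make the "it tanked in Texas but that's 1% of users" problem tractable is to compare each module's engagement against its own recent baseline per segment, rather than only globally. A hedged sketch, with the thresholds and data shapes as assumptions:

```python
# Hedged sketch: per-segment click-through-rate drop detection against a
# recent baseline.
def engagement_alerts(baseline, today, min_impressions=500, drop_ratio=0.7):
    """baseline/today: dicts of segment -> (impressions, clicks)."""
    alerts = []
    for segment, (imps, clicks) in today.items():
        if imps < min_impressions or segment not in baseline:
            continue  # too little data to judge this segment
        base_imps, base_clicks = baseline[segment]
        if base_imps == 0 or base_clicks == 0:
            continue
        base_ctr = base_clicks / base_imps
        ctr = clicks / imps
        if ctr < drop_ratio * base_ctr:
            alerts.append((segment, base_ctr, ctr))
    return alerts
```

Even this simple version still has the problem called out above: a holiday and a broken algorithm can look the same, so thresholds need care.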

Ronak: Being on the infra side, I love the fact that you're talking about monitoring and you actually care about it. In infra we usually have a production checklist you have to go through before launching something, and one of the items is: do you have metrics you can monitor? If the system goes down, you need to know; you don't want the customer to find out before you do. Do ML teams also have, if not a checklist, similar procedures to say, well, before shipping the model, you have to make sure you have the right metrics you're looking at?

Evan: Yeah, I think there’s a lot of, I think, especially on this. On this more than anything else. At least, like I said, coming from the land where our data engineering team and our ML team were the same team. We were called the data team. We have to do both because, you know, we were the team that wanted to start tracking clicks so that we could use it as a feature in our, in our recommendation algorithm. So we had to build the data warehouse to be ordered, to store all those clicks somewhere. And so we always want it, you know, we, we wanted to track and monitor everything because, you know, if you could track, if you can monitor something and then hopefully you’re, you’re, you’re keeping track of it. And if you can keep track of it, then you can, you know, hopefully use it as a feature in your model. So I think there’s a lot of overlap between those things.

Guang: That’s, that’s pretty cool actually. Right. I remember you were talking about this. So having a team that’s composed of both sort of people working on ML, but also data because it is so tied together How is that your general philosophy in terms of like, for most like ML teams too? Cause right. One issue I can imagine is getting people that are specifically working on ML to care about both the production aspect, as well as like how the data, you know, he’s created, but then also pushing people who are working on data to also care about like, Hey, you know, where are you going to shove all these data into? Right. Has that been a challenge

Evan: or, yeah. And I’ve already tried to make the case though, that I really want to hire machine learning. People who really deeply care about the product space as well. Right. So care about the emphasize and the product side and the data and the machine learning. Yeah, it’s definitely hard to, to, to cover all those things. And I think composing teams of people that care about different aspects of that and can share that with that knowledge and interest with other people. And, you know, it certainly, as you scale, like most of the lessons that I learned came from working at meetup, where we had a pretty small team at times, you know, it was like three of us, probably at one point up to, you know, maybe 10 by the end. But like yeah, so it was, it was, it was pretty, pretty, pretty tight knit. And we knew who cared about what, and it was pretty easy to remind somebody like, Hey, you gotta let me know when you’re doing that thing. Cause I, you know, do this other thing. And it’s hard. I don’t know the answers to how to scale this up indefinitely. Certainly the, the, the really big organizations that, that seemed to do this very well, obviously your Facebooks and your Googles, they just, they, you know, they, they just have a ton of, there’s a reason why, you know, there’s a reason why we get paid to do what we do is because there’s a limited number of people that can do all these things. And, you know, in Google wants all of them if they could. So it’s it’s, it’s hard to find people that, that can put all these, these aspects together, but, you know, as you grow, then you can hire more and more specialized people build more specialized teams and, and try and divide out those problems. But it’s definitely good to know, at least a little bit of all of these different factors, at least know, they exist at least know that somebody cares about them, because then you, you hopefully won’t have a gap in your approach. And our

Guang: And our last story, I think, is about you speaking at conferences about racist and sexist algorithms, so, model fairness and interpretability. Tell us more. What happened?

Evan: So, from my experience working on and deploying recommendation algorithms and that sort of thing, I put together a talk called "When Recommender Systems Go Bad." And I went around giving it, with a bunch of examples of times when companies built algorithms that turned out badly: like, oh, you trained it on Twitter data and now it says a bunch of racist stuff, and it's like, well, yeah, you probably shouldn't have trained it on a bunch of racist data. And then from there all the way up to really scary stuff: models for predicting recidivism, basically models that predict whether someone who has been in jail is going to commit another crime, and how these things can be impacted by race and so on. Whether it's out of ignorance or whatever, that can happen. In fact, I think as people who build machine learning algorithms, we need to have this top of mind in our day-to-day and really think a lot about whether this model is going to be biased, not just in the ML sense, where we talk about bias as over-relying on certain data or whatever, but biased in the societal sense. One of the problems with that is that society is biased, so it's hard to pull that out of your data and get your model to not learn it and not perpetuate it. I used to end all my talks by saying, racist computers are a bad idea; don't let your company invent racist computers. I think most people would agree with that, but it's hard to figure out how to do it. I never had all the answers, but I definitely wanted people to be aware of it and think about it: there are problems with society, and we probably shouldn't encapsulate them in our algorithms if we can avoid it. Now, did I always do this perfectly myself? Absolutely not. At one point, I think I'd already been giving this talk for a while, I was at work one day, and one of our community members said, hey, I just got an email from an organizer, and they were kind of offended by the topic recommendations they got. One of the ways we got around bootstrapping our algorithms with data was having people pick topics. We had this big topic graph on Meetup, just things that you're interested in: snowboarding, knitting, whatever. Organizers starting a meetup group could pick what topics the group was about, and users could pick what topics they were interested in. That's the basic way we bootstrapped our recommender system. For this particular case, the person starting the group was trying to start a group for women's business networking. Cool. The organizer had picked, I think, "women business owners" or something like that as their core topic, and from there we had recommended fashion, shopping, skincare, makeup. That was probably pretty offensive to that person, because they're trying to start a business and we gave these generically stereotypical topic recommendations. So, looking at that, we asked: what are the inputs to this algorithm? Why is it doing this?

We realized that the topic recommendation algorithm was based on user preferences: the classic "people who picked this also picked this" collaborative filtering recommendation algorithm, but based on what users picked rather than on what groups were about. Users who had picked "women business owners" were also likely to pick fashion, shopping, et cetera, but groups that were started about women business owners were not about all of those other things. So we just had to change what data we were putting into the algorithm and base it on the group-topic graph instead of the user-topic graph. Then we got much less stereotyped, gendered recommendations. But these things are hard; you can't predict all the different ways it'll happen. That was one of the times I had to learn that lesson myself.
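The fix described above amounts to changing which co-occurrence table feeds the topic recommendations: build topic-to-topic co-occurrence from the topics organizers attach to groups rather than from the topics users follow. A hedged sketch with illustrative data shapes:

```python
# Hedged sketch: topic co-occurrence built from either user-picked topics or
# group-attached topics; the recommendation quality depends on which one feeds it.
from collections import Counter
from itertools import combinations

def topic_cooccurrence(topic_sets):
    """topic_sets: iterable of sets of topics (one set per user, or per group)."""
    pairs = Counter()
    for topics in topic_sets:
        for a, b in combinations(sorted(topics), 2):
            pairs[(a, b)] += 1
            pairs[(b, a)] += 1
    return pairs

def related_topics(topic, pairs, k=5):
    related = Counter({b: n for (a, b), n in pairs.items() if a == topic})
    return [t for t, _ in related.most_common(k)]

# The change described above is essentially swapping the input:
# user_pairs  = topic_cooccurrence(topics_per_user)    # what users follow
# group_pairs = topic_cooccurrence(topics_per_group)   # what groups are about
# related_topics("women business owners", group_pairs)
```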

Ronak: It’s an interesting that, I mean, what goes in, comes out in, in this case, you mentioned that an ML model can reflect the reality of the society. The biases or inherent biases that exist in the data itself will come across, even though unintentional. There’s the other aspect of it as well, where a lot of the information we consume these days is through recommendation systems are systems which are powered through machine learning. Like the movies you watch, the news, you read the RGC, like all of it. So it also creates this information bubble around you where it kind of influences what, the way you see the world on a daily basis. How have you, or do you have any other thoughts on this in terms of like, How has this information bubble being been affecting the society? I mean, we don’t have a talk about the elections that happened over the last four years. There are a lot of documentaries about that, but in general, like how as an industry, are we even recognizing that, Hey, there is a problem which kind of we created for ourselves even though unintentionally, how can we make that better?

Evan: Yeah. It’s such a tough question. I think, you know, algorithms are just designed to maximize engagement, right? Everybody just wants people to like their website and use it because it’s had revenue or whatever, get up to solves, whatever thing you’re trying to do. I think maximizing engagement is not an algorithm only thing, right? So newspapers are trying to maximize engagement. TV networks are trying to maximize engagement. And so, you know, when you look at certain TV networks or, or news sources, or, you know, online, some like whatever far, whatever blog, like, they’re just trying to say things that their audience is gonna respond to and share or engage with whatever. Another great Twitter follow was Carl Higley. I’ve never actually met him, but I think he’s the best follow in rhexis world on Twitter. So hit him up. He was just ranted about this, like this week, I think, and saying that like maximizing engagement isn’t inherently wrong. Everybody like the, the, the, the newspapers publishing the story about how algorithms maximizing engagement caused all these problems. Those newspapers are also bacteria by writing

Ronak: Absolutely. Absolutely.

Evan: So it’s really hard to figure out where, where. The solution is here and I’m not saying at all, but that’s not a problem. You know, if anything, okay. So an algorithm used to talk about a lot about the filter bubble. So kind of like what I said earlier, if you want to bring in other data sources to break someone out of, you know, if you only ever listened to led Zeppelin on Spotify, they don’t only show you led Zepplin because they’ve, they’ve gotten good enough at these algorithms to find the other things that you might like. But with the very basic algorithms that is really hard to, to get you out of that filter bubble and only showing you the things that you’ve already sort of engaged with or listens to. So an algorithm that only ever shows you one type of thing is probably not maximizing engagement enough, cause it could probably show you other things. So the filter bubble and the information bubble, I think slightly different columns or at least they have definitely different solutions. I guess, if anything, I would say I wish. I wish that media and news weren’t so intertwined. I think that’s one of the problems is that like news and information probably shouldn’t be the same sources as entertainment because that incentivizes the news producers to be entertaining or at least on the same platform within, you know, I’m sure we’re all thinking of the same platform where people go for entertainment and ended up getting radicalized. Not great. I also think that, you know, one of the things that comes up on these and I, I’m no expert on these, this sort of thing, but free most beaches though, is the big rally like, Oh, you know, these platforms are hurting my friends because freedom of speech is not freedom of distribution. There’s no rule that says YouTube passed to promote either radical stuff, as much as they promote, you know, whatever your favorite YouTube channel is. So I think as these platforms, I think. If I was working on them, I would want to aggressively aggressively deprioritize any sort of hate speech, any sort of racism, any of those sorts of things. Cause there’s no, there’s no, there’s no law that you have to promote those things. You don’t, you don’t, you don’t have to guarantee those people apply interesting leg.

Ronak: A lot of these recommendation systems, or any of these algorithms to improve engagement, are in one way meant for information discovery. It’s like, hey, there’s way too much stuff, you can’t go through all of it, so let us suggest what you should look at on the platform. And there was a show on Netflix, which we won’t name, that portrays all the social media companies as evil, and I feel that gives tech too much credit. I don’t think we thought this through, that in the next 10 years, when we have all these amazing platforms and the internet is ubiquitous across the entire world, we would have this kind of control over society in general, that we could influence opinions and show people the world the way we want them to see it. And I feel it’s too much power in the hands of engineers who are writing code every single day without realizing the long-term impact of what they end up creating. If nothing else, I hope the tech industry recognizes that this is a problem. This conversation is happening, which is a good thing, but we at least need to be conscious about what we are building, and optimizing for engagement is not the only metric that’s going to help us grow.

Guang: Yeah, and as a side note, I really like how you put it, which is that in the tech world they have the same objective functions, right? Which is to say, maximize retention, or click-through rate, or whatever. But in the old world, without tech, I think there are a lot more qualitative checks along the way. It’s like, oh, maybe this is an idea I have, I’m going to implement it, but then as I’m doing it I realize something, because it’s a person at the end of the day who’s doing it. Versus in the computer world, once you say specifically what your objective is, unless you also add checks and guardrails, it will just literally do what you tell it to do. So I feel like there’s that aspect as well.

Evan: Yeah, and one extreme of that attitude is something I was trying to fight against with the talk I used to give about societal bias in algorithms. The example we used at Meetup, and something I was very proud of, was that we made sure our algorithms didn’t explicitly combine gender and interests. The key example for us was coding meetups. If you were to look at coding meetups on Meetup, yes, a higher percentage of men were interested in coding than women, but we didn’t believe we should let that impact who we showed coding meetups to. And we didn’t. We decided as a company that we didn’t want to look at gender that way, so we took a stand, and we used that value we held to choose what our algorithms did. What I definitely can’t abide are the engineers who say, oh, well, that’s suboptimal. Wasn’t it suboptimal to not show coding meetups that way? Maybe, but it mattered more to us to do what we did. And I think the same thing is what I would like to see. If I was working on those platforms, and it’s hard to know all the factors, I would want them to take a stand and say, actually, we don’t let this kind of hate speech or these kinds of ideas propagate on our platform, and we bake that into our algorithms.
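To make that concrete, here is a minimal, hypothetical sketch of what "taking a stand" can look like in code. It is not Meetup’s actual system; the columns, data, and model are invented for illustration. The point is only that the sensitive attribute is dropped from the feature set before training, so the model never gets the chance to pair it with an interest.

```python
# Hypothetical sketch: keep a sensitive attribute out of a recommender's features.
# Column layout and data are made up for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [gender (0/1), past_tech_event_count, rsvp_rate]; label: joined a coding meetup.
raw_features = [
    [0, 5, 0.8],
    [1, 0, 0.2],
    [0, 2, 0.5],
    [1, 4, 0.9],
]
labels = [1, 0, 1, 1]

SENSITIVE_COLUMNS = {0}  # index of the gender column

def drop_sensitive(rows, sensitive=SENSITIVE_COLUMNS):
    """Remove the columns the team has decided the model must never see."""
    return [[v for i, v in enumerate(row) if i not in sensitive] for row in rows]

model = LogisticRegression()
model.fit(drop_sensitive(raw_features), labels)  # trained without the gender column
```

The important part is that the exclusion is a deliberate, documented product decision rather than whatever happens to maximize a metric.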

Ronak: Thanks for sharing that stance. I don’t think it’s easy to take that stance openly, so we appreciate it. Going back to one thing: I was recently in India visiting my family, and my mom has started using some of these social media apps. She said, hey, it’s amazing, I recently became friends with a few of our relatives. And then she saw other people being recommended to her and asked, that’s pretty cool, how is it doing that? My family is not into tech. So how do you explain a recommendation system to someone who is not in tech? Explain it to me as if I have no idea, as if I’m a ten-year-old. I’m just trying to figure out how to talk to my family using vocabulary that works, because if I go and say “recommendation system,” they ask, well, what’s a recommendation system, and it takes too many words to explain. I feel like there should be a better way of doing it.

Evan: I think where it gets really tough is when somebody says, like, oh, is my phone listening to me? Because now I keep seeing wallet ads. And I’m like, I don’t know if it’s listening to you.

Ronak: I hope it’s not. So this happened recently in India too. Some of my friends got a voice assistant for their home, and they interact with it by voice, and they were like, oh, is this thing listening? I see this ad on Instagram now. And I’m like, I don’t think it works that way.

Evan: Yeah. Anyway, as far as how to define a recommender system, I think the easiest way is probably just to tell them one way it can work: oh, they know that you’re friends with so-and-so, or you’re friends with five people who are also friends with this person, so that’s how they can guess that you know them. Now, the truth is it probably gets a lot more complicated than that. Maybe they have access to the contacts in your phone because you installed the app, and they know you already have a contact for that person. The number of factors that could be weighed in is large, and honestly, not everybody knows all those factors, and people might feel a little uneasy if they did. The companies building these things don’t necessarily want everybody to be completely aware of all of it, and they’d have to explain all of that. Part of it is just hard. For the most part, we get used to these things as we go along. People used to freak out that they looked at some shoes and then a week later saw an ad for those shoes; we’ve all accepted that by now, right? Everybody has some sense of, oh, it was cookies or whatever. But it’s hard to know where the right line is. I think we’re getting better and better at transparency, and users understand more and more where they want to draw that line, so I hope we’re moving in a good direction.
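For readers who want the "friends of friends" explanation in concrete terms, here is a toy sketch of the heuristic Evan describes: rank people you are not yet connected to by how many mutual friends you share. Real systems weigh many more signals; the names and graph here are invented.

```python
from collections import Counter

# Toy friendship graph; names are made up for illustration.
friends = {
    "you":   {"asha", "bilal", "chen"},
    "asha":  {"you", "deepa", "chen"},
    "bilal": {"you", "deepa"},
    "chen":  {"you", "deepa", "asha"},
    "deepa": {"asha", "bilal", "chen"},
}

def suggest_friends(user, graph):
    """Rank non-friends of `user` by how many mutual friends they share."""
    mutual_counts = Counter()
    for friend in graph[user]:
        for candidate in graph[friend]:
            if candidate != user and candidate not in graph[user]:
                mutual_counts[candidate] += 1
    return mutual_counts.most_common()

print(suggest_friends("you", friends))  # [('deepa', 3)]: three mutual friends
```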

Ronak: Makes sense. So we’re starting to wrap up, and I have one question that’s probably a little more personal. For folks who don’t work on ML yet but are interested in dipping their toes into it, do you have any advice?

Evan: I love this question. So on one hand, there are tons of ways to go and get exposed to ML, learn some of the techniques, and do practice problems. There’s Kaggle, there are so many great ways to go and learn machine learning now, and I think that’s all awesome. But if you’re hoping to do it at work, if you want to do it in prod at your company, I think the coolest, or the most effective, route is to come at it from the problem side. You know something about a problem that your team is trying to solve, whether it’s a product problem or an infrastructure problem, and you know more about that problem than somebody else will. Maybe you have a machine learning team already; maybe you don’t have anybody at your company doing machine learning at all. So think about solving the problem first and use your knowledge of the problem; that’s going to be one of the best angles into it. And try not-machine-learning first. That’s the key, and I tell that to every machine learning engineer: what’s the first thing we could do that’s not machine learning? Try those things first. Then, at some point, well, there’s a reason machine learning exists and is so popular: it can solve problems that are hard to solve without it. So do the simple things first and use your knowledge of the problem. If you do end up going to your machine learning team because you want to deploy a machine learning algorithm for the problem you’re trying to solve, I’d say one of the big things most machine learning folks have is a lot of scar tissue from getting burned by machine learning algorithms. One of the things you learn is: oh, this algorithm has 99% accuracy, awesome, I cracked it. And it’s like, no, you didn’t, you broke it. You put data in that it wasn’t supposed to know. Constructing the problem, how you ask a machine learning question, is a really hard thing to do. So the worst thing you could do is go to your ML team and say, hey, I wrote this algorithm and it has perfect performance for my problem, can you help me deploy it? They’re like, first of all, I like making the algorithms, not deploying them, so don’t come into my house with your algorithm. And second, it does not have 99.999% accuracy, because if it did, it probably wouldn’t be worth doing with machine learning, or it’s just broken. Developing that scar tissue, a lot of skepticism about algorithms, matters. If you understand your problem well enough to really know how to test a machine learning algorithm and see whether it’s working or not, that’s a really good sign that you’re thinking about your problem deeply enough to go and apply machine learning to it. So, like my constant refrain: keep the problem in mind. Think about what you’re trying to do, not just how you want to do it.
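The "99% accuracy usually means you broke it" warning is most often about evaluating on data the model has already seen, or otherwise leaking the answer into the features. Here is a minimal sketch of the difference, using synthetic data and scikit-learn; the numbers in the comments are typical, not guaranteed.

```python
# Minimal sketch of why a held-out test set matters; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Scoring on the training data looks amazing and means very little.
print("train accuracy:", model.score(X_train, y_train))  # typically close to 1.0
# Scoring on held-out data is the number worth reporting.
print("test accuracy:", model.score(X_test, y_test))     # noticeably lower
```

If even the held-out number looks too good to be true, the usual suspects are duplicated rows across the split, features derived from the label, or a test set that leaks future information.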

Ronak: I love that advice. If nothing else, I’m going to send that tip, try something that’s not machine learning first, to a lot of people. Thanks for sharing it, and I can say it’s coming from an expert, not from me. So, we have a question we ask everyone towards the end: what was the last tool you discovered?

Evan: I certainly didn’t discover it. The answers I have to give here are things that have existed for a very long time, but I’m finally getting the hang of them. I can finally write an awk command and get it right the first time, at least every once in a while. I mean, only simple stuff, only printing a column, nothing crazy, but that feels really good. That’s a bit of a power I’ve been learning. I also really like, I think it’s called process substitution, where you write an angle bracket and a parenthesis and then you can put in a whole command, and it takes the output of that command and treats it as if it were a file, like having a named pipe. Sometimes that’s just a lot easier to construct. I always struggled with xargs, and for certain things I find it way easier to just treat the output as if I had a file on disk, except I don’t want to write the file anywhere; I just want to take it, put it through sort first, and then treat it as a file in the command. So those are the ones I’ve moved on to. I know there are some real command line jockeys out there.

Guang: He’s really trying to impress our audience now.

Ronak: Yeah, we love bash. It’s the best. Anything else you would like to share with our audience, Evan?

Evan: I guess I would say, to people trying to start their careers, this is one of my favorite things to tell people: you can’t go wrong working with people that you like on things that you like, but you especially can’t go wrong working on things that you believe in and things that matter. So if I could have any influence on the tech landscape, I would say go work on problems that matter, things that you believe in. It’s going to be good for your career, it’s going to be good for how you feel about what you do, and it’s going to be good for the world.

Ronak: Oh, that’s good advice. Thank you so much for taking the time. This was awesome.

Guang: Thank you so much. Yeah.

Evan: Thanks for having me.

