Description
In this episode, we talk about some of the fundamentals of machine learning and AI with Oscar Beijbom, co-founder of Nyckel.
Show Notes
Transcript
[00:00:05] SY: Welcome to the CodeNewbie Podcast where we talk to people on their coding journey in hopes of helping you on yours. I’m your host, Saron, and today, we’re talking about some of the fundamentals of machine learning and AI with Oscar Beijbom, co-founder of Nyckel.
[00:00:19] OB: You know, now you’re in the business of detecting when your model starts performing worse and pulling in new data that is representative of the new distribution and training on that.
[00:00:30] SY: Oscar talks about the difference between machine learning and AI, what it traditionally takes to integrate machine learning into your code and how using a tool like Nyckel can help you in your learning journey after this.
[MUSIC BREAK]
[00:00:54] SY: Thank you so much for being here.
[00:00:55] OB: Thanks for having me.
[00:00:56] SY: So before we get into machine learning and AI and all the stuff you’re doing at Nyckel, tell us about how you first started to code.
[00:01:04] OB: Yeah. I mean, I’m old enough that we were among the first people on the block to have a computer at all. My dad’s brother worked at IBM or something, so he got us a cheaper computer. This was really early. The programming language was called BASIC. Basically, what you could do was write a little prompt and then have the player answer a question, yes or no. So it was sort of like role-playing games where you navigate some sort of world the game designer had developed.
[00:01:36] SY: So I know that you have a lot of traditional schooling in computer science and engineering under your belt. Can you talk about that part of your life?
[00:01:44] OB: Yeah. I mean, I still don’t really consider myself a programmer, even though I’ve done it 24/7, I would say, for the last 10 years or so.
[00:01:53] SY: Yeah.
[00:01:54] OB: But I mean, my background is in engineering physics. That’s still what I consider my core expertise, my background. This is stuff like continuous math, calculus, that kind of stuff, and then all the physics that goes with it, dynamic systems, control theory, and those kinds of things. So I did that as my master’s. And then our math department had an applied math group doing an optional focus on computer vision, basically machine learning. I took that and decided to focus on it. At the time, believe it or not, it was actually considered a very bad career choice.
[00:02:29] SY: Oh yeah?
[00:02:30] OB: Yeah. Back then in Sweden, there was not a single job in computer vision and machine learning, because it just didn’t work. There was almost nothing that worked. Of course, I think in the US there were probably some companies playing around with it, but not in Sweden. But I did it anyway because it just seemed so exciting. It was truly a passion choice where it’s like, “Wow! I can somehow get this computer to understand what’s in an image by having it learn what’s in an image.” It seemed like such a powerful thing that I wanted to get into it.
[00:03:00] SY: And what was it that made you love it so much that you went through what sounds like a very rigorous academic career, getting your master’s and you’re doing all that math and engineering? What was it, despite the lack of jobs and it being considered, maybe not a great career option? What kind of kept you there?
[00:03:20] OB: Yeah. I mean, it’s a good question. And I think I got a little bit lucky. I mean, I work hard, right? So maybe you make your own luck. But even as I was completing my master’s, in Sweden you do this six-month internship at a company to sort of prove your skills in a more practical setting. And the company there had me do an applied machine learning experiment to do autofocus in a microscopy setting. So we just collected data, extracted features the old-school way, and then ran the algorithm of choice back then. My academic advisor was like, “Yeah, sure. Go do the project, but it’s never going to work. Don’t expect it to work.” And also the guys at the company were like, “Yeah, we realize this is pretty far out there, but let’s just see what happens.” And lo and behold, it actually worked really well. So it was such an encouragement. They adopted it. It went into production. So that was a really great start. It gave me confidence that if you just pay attention and push through, you can actually get machine learning to work in surprising settings. And then my first job out of college was at a startup in Sweden that was building, well, it was a crazy idea: an airbag bicycle helmet called Hovding. So what this thing is, is imagine you put a collar around your neck, right? Then you zip it up. The collar is equipped with an airbag that is sort of rolled up into the collar, and it also has a little microcomputer and an AI algorithm. What it does is that if you happen to be in a bike accident, as you’re flying through the air, as you’re experiencing it, the AI will realize that you’re in a bike accident and inflate the airbag around your head. It’s like a hood that inflates around your whole head.
[00:05:06] SY: Interesting.
[00:05:06] OB: So by the time you hit the ground, you’re actually fully protected, and actually better protected than with a standard helmet. So the founders were both non-technical. They’re designers who had come up with this idea and gotten the money. So I was the first engineer on site. I had to design the computer hardware, the sensors, and then the actual machine learning algorithm, the data collection, and so on. And we actually got it to work; this is now in production and shipped across the world. On paper, it sounds too good to be true. How can you build an embedded AI system that reacts that quickly in such a safety-critical application? Right? You don’t want it to fail in either direction: you don’t want false positives, you don’t want false negatives.
[00:05:51] SY: So tell me about your career trajectory. After you did all this work, where did you go from there and how did you end up in Nyckel?
[00:06:01] OB: I did Hovding for two or three years, and then I still felt like I wanted to learn more. I still felt like this field of machine learning, there’s so much to it, and I’d only done it for a few years, so it was back to more schooling. I applied for a PhD program in the US, in San Diego, and I was accepted. It took me two tries, actually. So then I spent six years in school there, and that was a computer science PhD. I had to go back and learn all the computer science basics that I hadn’t learned because I was doing engineering physics in Sweden. But of course, there was amazing machine learning talent there among the faculty. So I really fleshed everything out, all the different concepts, all the different ways to think about it. It’s a very big space. And then also during my PhD, I built this thing called CoralNet. It’s a site I developed during my PhD, and it’s actually used today by many big government agencies and by researchers to do monitoring of coral reefs.
[00:06:57] SY: That’s cool.
[00:06:59] OB: So the idea was you take a picture of the coral reef and then you could train your own sort of custom AI to classify what’s on the bottom. Like, is it sand? Is it coral? Is it algae? And so on. So you can quantify what’s down there and extrapolate your ecological trends and biological state through the system. And then what happened, as I was maybe halfway through my PhD, was the big breakthrough in deep learning. I went to Berkeley and did research there. It was such a completely new thing, right? And I wanted to wrap my head around it. And then I went to the self-driving car industry. So for the last five years, I was actually leading a big team, around a hundred people in the end, to develop the AI to drive a car, which is arguably one of the hardest AI problems of today. It’s a big robot with a lot of mass navigating an open-ended environment, so it’s very safety critical. Like I said, we had about a hundred people in the end, and they were on things like finding the right data, finding the right model architectures, building all the infrastructure you need for data mining, for training, for deployment. And then we had a team focused basically on the metrics and how we evaluate these things. So while I was immersed in that world, a friend of mine, a developer who used to be one of the leads at Square for their developer organization, had a side project where he needed some “very simple” content curation AI. His users contributed content, and he wanted some way to say this content is good or this content is bad. And he was like, “Why, as a generalist software developer, is it still so hard for me to add this relatively simple classifier, in this case a text classifier, to my application?” And I was like, “Yeah, you’re right. It shouldn’t be that hard.” And we started thinking about how we can put the cleanest, simplest possible box or API abstraction around the whole machine learning complexity. And that’s basically what Nyckel is. So we have a UI that is pretty well developed, but it’s fundamentally an API where you can say, “Just post your own data. It’s all about custom AI. So for your own classification task, just give us some images or text or whatever you have, and give us the output categories that you’re looking for. We’ll train it all for you, and we’ll do it very quickly, in a few seconds, and then we’ll deploy it immediately to elastic infrastructure. So you can start hitting that invoke API with millions of invokes immediately.”
[00:09:36] SY: Very cool.
[00:09:36] OB: So that’s what it’s about. It’s about removing as many barriers as we can. I mean, it’s always going to be up to you to find your own data, right? We can’t do that for you. You have to define the categories that you care about. But we work really hard on abstracting away as much as possible. We sometimes call it “machine learning anxiety,” because even if you go and learn about machine learning, or if you try to build this yourself, there are so many choices. There’s an insane number of choices, right? Libraries, algorithms, networks, hyperparameters, data splits, you name it. Right?
[00:10:11] SY: Wow! Yeah.
[00:10:11] OB: So we try to say, “You know what? Don’t worry about that. We’ll try it. Basically, we’ll try everything and we’ll give you what works best for you.”
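To make the workflow Oscar describes concrete, here is a rough, purely illustrative sketch of the general “post labeled data, then invoke” pattern. The base URL, endpoint paths, payload fields, and API key placeholder below are hypothetical and are not Nyckel’s actual API; consult a service’s own documentation for the real interface.

```python
# Hypothetical sketch of the "post labeled data, then invoke" pattern.
# Endpoints, payload fields, and the API key are illustrative placeholders only.
import requests

BASE = "https://api.example-ml-service.com"         # placeholder, not a real service
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # hypothetical auth scheme

# 1. Create a classification function with the output categories you care about.
fn = requests.post(
    f"{BASE}/functions",
    json={"name": "content-quality", "labels": ["good", "bad"]},
    headers=HEADERS,
).json()

# 2. Upload your own labeled examples; the service trains and deploys for you.
for text, label in [("great write-up, well sourced", "good"),
                    ("BUY CHEAP PILLS NOW!!!", "bad")]:
    requests.post(
        f"{BASE}/functions/{fn['id']}/samples",
        json={"data": text, "label": label},
        headers=HEADERS,
    )

# 3. Invoke the deployed model on new data from inside your application.
result = requests.post(
    f"{BASE}/functions/{fn['id']}/invoke",
    json={"data": "is this comment worth showing?"},
    headers=HEADERS,
).json()
print(result)  # e.g. {"label": "good", "confidence": 0.87}
```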
[00:10:17] SY: So let’s dig into machine learning and AI as concepts. I feel like we use those words very interchangeably. I see a lot of machine learning/AI kind of gets lumped up into one category. And I wanted to know what is the difference? What is the difference between machine learning and AI?
[00:10:38] OB: So machine learning is a well-defined term. It’s a branch of statistics, basically, where you model the input data and the output data in a way that when you see new input data, you can infer the most likely output label. So that is a very well-defined branch of research and engineering. Right? And that’s the term I’ve typically used my whole career. AI, on the other hand, is not very well defined. It’s a very wishy-washy term. I would say, if anything, it’s a broader term. So AI just means anything that sort of resembles intelligence. And of course, the problem is, what is intelligence? How do you define intelligence? It’s actually very difficult. But loosely, it’s something that resembles intelligence. And of course, you could implement an AI using machine learning. So you could use some machine learning method to implement something that resembles intelligence. For example, all the advances that DeepMind is making, you might’ve heard about the amazing breakthroughs in chess and Go. Arguably, that’s artificial intelligence, because arguably you have to be intelligent to beat the world’s best Go player. It’s a narrow type of intelligence because that same computer can’t do anything else. It can’t…
[00:11:52] SY: Right. Right. Right.
[00:11:53] OB: But it’s intelligent in the narrow sense, and that AI was trained using machine learning. Right? And you could also arguably build an AI like the old chess engines. They weren’t powered by machine learning. They were powered by rules that someone had written down, basically: if you’re in this position on the chess board, you should do this. Rules of thumb. So you can use rules of thumb to build an AI. Arguably, it’s not going to be a very good AI. But just to contrast: you can use rules of thumb to build an AI, or you can use machine learning to build an AI. That’s sort of how I think about it.
[00:12:32] SY: Okay. So let’s dig into that a little bit more. What is machine learning if it’s not rules? Right? I think as programmers, we’re pretty comfortable with rules, right? We say like, “If this and that,” it’s generally what coding boils down to, if we’re being honest. So with machine learning, if it’s not that, then what is it?
[00:12:52] OB: Okay. So that’s a great question. And of course, I would say once the model is trained, it actually becomes rules again. Right?
[00:12:59] SY: Okay.
[00:13:00] OB: It becomes like, “Okay, I take the input, I do some compute,” and then ultimately you have some sort of score, and then you apply a threshold: it’s either this or that. Right?
[00:13:08] SY: Right. Right.
[00:13:08] OB: However, the difference is that the rules are learned. Right? The rules are learned from data. So instead of having a bunch of if-statements, I mean, there are literally machine learning methods called decision trees that are exactly just if-statements. But the thing is that they’re learned from data. So which question you ask and how you branch through that tree is optimal in a sense, based on historical data. But then, of course, nowadays you use deep neural networks, and it’s still sort of the same thing: you can learn rules from data.
[00:13:44] SY: Okay. So what I’m hearing you say is if I’m doing just a regular script, just doing kind of everyday coding on my own, I’m writing those if-statements. Like I’m the one determining, if they click this button, get this page, that kind of thing. Whereas with machine learning, it’s not really me figuring out the rules, it’s me training, and that’s where I think the idea of training with data comes in. It’s me giving my program a set of data and letting the data dictate what those rules are. It’s not from me. It’s the data kind of making those decisions. So the output might be the same, where we still have those conditionals, those decision trees, those if-statements, but the way I got there is a different process. Am I understanding that correctly?
[00:14:27] OB: Yeah. I think that’s the way to put it. And that is of course the appeal, right? You “don’t” have to write any code. Right?
[00:14:34] SY: That’s true. You make it sound so easy when you put it that way. I don’t have to write any code.
[00:14:39] OB: And I think moreover, it improves by itself. Right? Again, quote-unquote, where you feed it more data, you get a better system. Right?
[00:14:48] SY: Right.
[00:14:48] OB: So at that level, it’s extremely compelling technology.
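As a small illustration of the “rules learned from data” idea from this exchange: a decision tree ends up as nested if-statements, but the splits are chosen from training data rather than written by hand. This is a minimal sketch on made-up toy data, not anything from the episode.

```python
# Minimal sketch: a decision tree learns its if-statement rules from data.
# The feature names and toy data below are made up for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row is [hour_of_day, message_length]; labels mark spam (1) or not spam (0).
X = [[2, 500], [3, 450], [14, 40], [15, 35], [16, 60], [1, 700]]
y = [1, 1, 0, 0, 0, 1]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned "rules" read like nested if-statements.
print(export_text(clf, feature_names=["hour_of_day", "message_length"]))

# Applying the learned rules to a new input.
print(clf.predict([[4, 650]]))  # likely [1] (spam) given this toy data
```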
[MUSIC BREAK]
[00:15:10] SY: So if I am new to this world and I want to start diving into machine learning and AI, what are some basics that I should know and maybe learn about in my journey? We kind of touched on this idea of training. We talked about datasets that we have to work with. What are some other kinds of basic, some other concepts I would come across if I was getting into this world as a beginner?
[00:15:38] OB: I mean, you probably have to start by picking up Python because that’s the language that most of the deep learning frameworks use. So that’s a good, tangible start. And then once you know Python, I personally recommend a framework called PyTorch. It’s a fantastic piece of software, the best deep learning software that’s out there. It’s called PyTorch because it used to use a different programming language, but now they hooked it up to Python and did all the Python bindings. And it’s just beautiful. It’s very powerful, very descriptive. And once you know PyTorch, essentially, if you’re just doing a hobby project, that’s pretty much it. You can find some data. You can train any neural network you want. In PyTorch, you can just define the topology, and then you can save that network, load it up again, and do your predictions on new data. That’s the very core. If you’re just a hobbyist, that’s an excellent place to start. What happens when you take this to production, though, is it gets complicated in several ways. For example, when you train, right? Because the theory of machine learning is so weak, a lot of it basically amounts to just trying a lot of things. So for any new problem, there’s a couple of deep neural network architectures that you would think might work, but even those architectures have a lot of different hyperparameters: How do you train it? How do you fine-tune it? How do you preprocess your data, for example? So basically, what ends up happening is that you need to try a lot of different things on your data to know which works best. And to do that effectively, you basically have to build up a distributed computing cluster, right? Some sort of GPU cluster. And for example, there is an open source community for this called Determined.AI.
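For readers who want to see the core PyTorch loop Oscar is describing, here is a minimal sketch: define a network topology, train it, save it, load it back, and predict on new data. The tiny architecture and the random stand-in data are made up purely for illustration.

```python
# Minimal PyTorch sketch: define a topology, train, save, reload, and predict.
# The data here is random stand-in data, purely for illustration.
import torch
from torch import nn

# 1. Define the network topology.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 2. Train on (fake) labeled data.
X = torch.randn(64, 4)            # 64 examples, 4 features each
y = torch.randint(0, 2, (64,))    # binary labels
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# 3. Save the trained weights, then load them into a fresh copy of the topology.
torch.save(model.state_dict(), "model.pt")
model2 = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
model2.load_state_dict(torch.load("model.pt"))

# 4. Predict on new data.
with torch.no_grad():
    print(model2(torch.randn(1, 4)).argmax(dim=1))
```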
[00:17:22] SY: Okay.
[00:17:22] OB: So they kind of solved that problem. I think it’s still pretty early, but they let you provision GPU nodes on, say, AWS relatively easily, I would say, and then run experiments and then read out your results. That helps you figure out which one is the best model. That’s one set of complexity that you kind of have to figure out. And then once you have the model, you need to figure out how to deploy it. And depending on whether you’re deploying on a device or deploying in the cloud, that has different complexity. For example, if you’re on a device, you need to prune the network, make it a little bit smaller, maybe cast it to different data types, so you get the very fast inference time that you might need for, say, a real-time application. So both of those things are more on the infrastructure side, but then you start getting into what I would say are more annoying issues, like data drift. So let’s say the data you trained on is data you had lying around on your laptop, and that data maybe isn’t exactly the kind that you’re seeing in production. Say you’re training a spam classifier on some data you found on the internet, but then the kind of spam that you see on your site looks different, has completely different characteristics. What happens, unfortunately, is that the machine learning algorithm you trained is not going to work as well. It’s something called domain shift, and it’s actually really annoying in machine learning. Most of the algorithms assume that the data you train on and the data you see later are drawn, that’s the technical term, from the same distribution, the same statistical distribution of the data. And when that shifts, who knows? There are very few guarantees on what your model is going to do. Now you’re in the business of detecting when your model starts performing worse and pulling in new data that is representative of the new distribution and training on that. So stuff like that. It gets annoying.
[00:19:09] SY: Yeah. Yeah. Yeah.
[00:19:12] OB: There’s a lot of little things.
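One very simple way to start watching for the domain shift Oscar mentions is to compare the distribution of an incoming production feature against the training data with a statistical test. The sketch below uses a two-sample Kolmogorov-Smirnov test on made-up data with an arbitrary alert threshold; it is only a starting point, not a full drift-monitoring system.

```python
# Rough sketch of drift detection: compare one feature's distribution in the
# training data vs. recent production data with a two-sample KS test.
# The data and alert threshold are made up for illustration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # what you trained on
prod_feature = rng.normal(loc=0.6, scale=1.0, size=1000)   # what production sends

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:  # arbitrary threshold for this sketch
    print(f"Possible drift (KS statistic={stat:.3f}, p={p_value:.2g}); "
          "consider collecting fresh production data to label and retrain on.")
else:
    print("No strong evidence of drift in this feature.")
```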
[00:19:14] SY: So I know that you have, as you mentioned earlier, a mathematics background and you’ve been doing this for 20 years. You have a master’s and you’re very well trained and educated. Do you feel like you need all of that background to implement machine learning and AI into your code today?
[00:19:34] OB: Kind of.
[00:19:34] SY: Really? Okay.
[00:19:35] OB: I mean, I would say if you want to do it yourself, like if you’re the kind of person who wants to understand all the technology you’re working with, yeah, you kind of need… I mean, I have spent way too much time in school, but I would say you need at least a master’s in some of the related statistical topics, and I think you also need to be a pretty solid coder to actually be able to pull it off. Or if you’re lucky and you’re hired into a company that has a good mentor, that can help you navigate some of those things. I think it’s a pretty long road, and that’s basically why we started Nyckel. Nyckel is not the only option. There are several companies like Nyckel that try to abstract away as much as possible. And in fact, what’s generally called the MLOps, machine learning ops, landscape is quite big and quite messy. If you want to do it yourself, there are ways to. Like I said, Determined.AI abstracts away the training clusters, and then there’s something called Weights & Biases that abstracts away the monitoring of experiments. There are other services that help you with the monitoring of models, like model drift monitoring. So you can piece it together like that if you want to, or you can use a service like Nyckel, which is sort of the highest level of abstraction, we’d say. Just give us your data. We promise to do our best and use all our experience to train the best possible model given your data. And then we also deploy it for you. So you’re just interacting with our API at a higher level. And I would say today, it’d be a little bit silly not to start there. Even if you want to do it yourself, I would say use something like Nyckel first, because you’ll get up and running in like a day, right? With something that is likely going to be hard to beat. Of course, you can beat it. Right? I mean, we have what’s called an AutoML system, right? It’s just going to find the best possible model for you. But that automatic system, as much as we love it and work on it, is general, right? It serves all our customers. So of course, if you spend a year just working on your specific data and your specific problem, you’re probably going to do better. But I would say start with Nyckel or one of our competitors, use that as a baseline, and then try to improve on it yourself later if you have time.
[00:21:57] SY: Yeah. I think that’s what I’m trying to figure out, because I know that there are so many ML and AI solutions that are either purely no-code, low-code, or just really easy-to-use APIs. Right? I mean, you’re coding, you still need to know some Python and you still need to know how to work with datasets, but you don’t need a degree. Right? You don’t need to go back to school to integrate that. Fast.AI is an example that comes to mind. I feel like there’s a bunch of others I’ve heard over the years that really lower that barrier to entry. And so I’m wondering, at what point does it make sense to use these kinds of off-the-shelf tools, these things that require a little bit of training? And I think Fast.AI has courses and like a little school that you can watch online and kind of wrap your mind around things if you’re totally new, to integrate into your code. And in what situation would someone say, “This isn’t quite enough, I think I want to do more hardcore training”?
[00:23:04] OB: Yeah. So I mean, Fast.AI is actually, I would say, more of an education and training platform to help people who want to do it themselves. Actually, I would say it’s quite different from Nyckel. With Nyckel, we abstract away all the machine learning complexity and all the infrastructure complexity. Fast.AI is not a software as a service, right? It’s an education. It’s training material. And they also have some libraries. I mentioned PyTorch before; they have their own wrappers on PyTorch that make it maybe a little bit easier.
[00:23:34] SY: That’s what I was thinking. Exactly. That’s what I was thinking. Yeah, they do have their own wrappers and stuff.
[00:23:37] OB: Yeah. For sure, they make it a little bit easier. I think they’re doing a great job, and I recommend it for anyone who wants to learn the nuts and bolts and really do this themselves. So I would say, first of all, start using Nyckel. I mean, Google has this thing called Vertex AI that essentially does the same thing. We like to think they’re not as good, they run slower and the API is clunkier and so on, but just to mention one. There are alternatives to Nyckel. Right?
[00:24:03] SY: Sure. Sure.
[00:24:03] OB: But then once that’s up and running and you have time, then yeah, I think Fast.AI is an excellent place to start to learn and try to do it yourself.
[00:24:12] SY: I can’t remember exactly when this started, but it feels like for many, many years people have been saying that machine learning is everywhere and it’s going to be a part of all the code, all the businesses, all the software, and it’s been really talked about as this thing that’s kind of integral to everything you do, no matter what you do. I don’t know how true that is, but that’s definitely the message that I feel like I’ve heard over the years. And I’m wondering, if I’m a developer and I’m trying to integrate some element of machine learning into my code, how would I walk through that process? How would I kind of break that down into manageable steps and get started?
[00:24:54] OB: Right. So that’s a great question. And my co-founder wrote a blog post about this where he argued that machine learning is just another developer tool. If you listen to some people, like some of the big advocates, they make it sound like it’s the end of software. I don’t think so. Just like you’re choosing between using a regular expression or some simple if-statements, at some point you hit the limits of what you can write down explicitly, and then you turn to machine learning for those functions. Right? So at Nyckel, we talk a lot about you training a function that you’re then calling inside your code, right? So you’re training a Nyckel function to do a specific piece. And going back to your question, it’s just like that. You have to understand the vocabulary of, well, what are the types of functions that you can model with machine learning? Right? So for example, the most basic one is probably binary classification, where the input is text or an image and the output is either true or false. Right? That’s binary classification. That’s one type of function you can train with machine learning. But then you can also train a function where the output is not binary; it’s one of many. Right? And then there’s something called multi-label, where the output is actually many out of many, like it could be any number of things that are true. More like tagging, where many tags can be true. And then there’s more. The output can be a float, like a regression. We call it regression, but the output is a float instead of a categorical value. And then there’s detection in images, where you do spatial reasoning inside the image: you see an object in this corner of the image, or you see three objects in an image. So I think it is important to understand what the different function types are that you can try, to kind of know the design space there, what choices you have. And then you have to map that onto whatever software or whatever thing you’re building. You have to think about, “Okay, which pieces here can I hand over to a machine-learned function, and which things should I just break out and make a conventional module or class for?”
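One way to internalize the vocabulary of function types Oscar lists is to write them down as plain type signatures. The stubs below are only a sketch of the input/output shapes involved (binary and multiclass classification, multi-label tagging, regression, detection); the names are hypothetical and the bodies are placeholders, not working models.

```python
# Sketch: the "shapes" of common machine-learned functions as type signatures.
# Function names are hypothetical; bodies are placeholders. The point is the
# input/output contract of each function type.
from typing import List, Tuple

Image = bytes  # stand-in type for illustration
Box = Tuple[int, int, int, int]  # x, y, width, height

def is_spam(text: str) -> bool:            # binary classification: true or false
    ...

def topic(text: str) -> str:               # multiclass: exactly one of many labels
    ...

def tags(text: str) -> List[str]:          # multi-label: any number of labels
    ...

def quality_score(text: str) -> float:     # regression: a continuous value
    ...

def detect_objects(image: Image) -> List[Tuple[str, Box]]:
    """Detection: (label, bounding box) pairs located within the image."""
    ...
```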
[00:27:07] SY: Coming up next, Oscar talks about use cases where using Nyckel shines, as well as some of the limitations of Nyckel after this.
[MUSIC BREAK]
[00:27:27] SY: Tell me a little bit more about Nyckel and the kind of category of no code AI solutions in terms of what it can really do for a product, what kind of power it can give to an app? What are some examples of interesting use cases and applications of something like that?
[00:27:48] OB: I mean, it’s no-code in the sense that when you train the model, you can use the UI, but it’s certainly not no-code to get into production. We basically just expose an API, and you have to hit that API using whatever language you choose. We think of it more as a developer tool, but certainly the front end, the training part, can be done in the UI. The use cases we have really vary. So let me give you a couple. There is a company called Garden that builds these indoor gardens. It’s sort of a high-tech garden. You have these plants that grow up on a trellis, and there are built-in lamps and heat, and also cameras and so on, and an irrigation system. So they are using Nyckel to figure out, given an image of a plant, is it healthy? Is it wilting? Does it have enough water? So that’s one example where it’s very custom to them. There’s not going to be an off-the-shelf classifier for that, but using Nyckel, they could just upload their own images and give their own examples, their own definitions of this is wilting and this is not wilting, and then we train that for them and deploy it, and they’re using it. Another cool use case is a company that does math quizzes. Apparently, I’m not an educator, but apparently when you do math quizzes, instead of doing multiple choice, you can have the students write down their answer, like the reasoning behind their answer.
[00:29:10] SY: Oh, interesting.
[00:29:11] OB: Yeah. So this is a company called Math ANEX that provides that service to schools, and they grade those free-form answers against a very rigorous rubric for how to make it fair. And they used to use humans to grade those quizzes, until they came across Nyckel and realized they could have a machine-learned function, an AI, grade these quizzes for them. And when they first encountered it, they were like, “Okay, that sounds a little bit too good to be true, but let’s try it.” And they did, and it actually worked really well for them. So something like 90% of their manual annotation effort can now be done using Nyckel instead.
[00:29:45] SY: And I think there’s also a use case where people used it to help in the war in Ukraine. Is that right?
[00:29:51] OB: Yeah, that was actually pretty cool. So when the war started, and Nyckel is self-serve, right? People just create an account and build a function themselves. We noticed this developer in Ukraine had created a function, and we reached out to him and then obviously gave him the service for free and stuff like that, because we wanted to do what we could. The problem he was trying to solve was building a site to place internally displaced refugees in Ukraine. So he had a very simple bulletin board where people could post, “I can offer shelter,” or, “I need shelter.” And for some reason, people were clicking the wrong button. So they needed shelter, but they were clicking the “I offer shelter” button.
[00:30:30] SY: Oh, interesting.
[00:30:31] OB: The entries in his database got very messed up. So he trained a Nyckel function to just say, for this particular piece of text, is this an offer or is this a request for shelter?
[00:30:40] SY: Okay.
[00:30:40] OB: And that was also really cool because some of them are in Ukrainian, some of them are in Russian, some are in English, like the actual offers and requests. But he was able to classify them with high accuracy, and that helped him effectively run his humanitarian service. That’s a real point of pride. We were happy to be able to help in a very small way.
[00:31:02] SY: Tell me about some of the limitations of using Nyckel or something like Nyckel. What can’t you do? Where’s kind of the line for that?
[00:31:11] OB: Yeah. You are limited to the types of functions that we offer. So one example is that we just recently launched something called Nyckel Search. You create a library of images, and then you can search that library using another image to find the most similar images. That’s a very common request from people who run web shops. We have a couple of NFT customers, actually, that use this. And that came from a customer request: “I don’t actually need to classify this image. I need to search and find similar images in that database.” And that’s the kind of thing where, if we don’t provide that particular function type, then you can’t use Nyckel. Obviously, we encourage everyone to contact us and talk to us, because we have several beta programs going. We’re always interested in new use cases, but that’s the most obvious limitation of Nyckel. If we don’t support it, you can’t do it. And then I would say the other part is accuracy. I think it’s a bit of a blessing and a curse, but right now, we don’t expose any of the machine learning dials. We just say, “We’ll try everything we have and give you the best model.” And all you need to do is focus on your data. So make sure all the annotations are correct, and we help you find those; make sure you’re sending us data that is relevant, like I said with that domain shift, data that is representative of your production environment and not toy data you’re training on; make sure your label set is relatively balanced. All these things. So we help people focus on their data, because a lot of the time that is actually what matters most. However, obviously, once the data is as good as it can get, there have been one or two examples where the Nyckel accuracy still isn’t high enough for them, and then they just have to go somewhere else or build it themselves.
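The image search Oscar describes typically boils down to embedding each image as a vector and ranking a library by similarity to the query’s vector. Below is a rough sketch of that idea; the `embed` function is a stand-in (a real system would run the image through a pretrained vision model), and the vectors are random, so this only illustrates the mechanics.

```python
# Sketch of image similarity search: embed images as vectors, then rank the
# library by cosine similarity to the query embedding. `embed` is a stand-in
# for a real pretrained vision model; vectors here are random for illustration.
import numpy as np

rng = np.random.default_rng(42)

def embed(image_id: str) -> np.ndarray:
    """Placeholder embedder; a real one would run the image through a model."""
    return rng.normal(size=128)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

library = {name: embed(name) for name in ["shoe_1.jpg", "shoe_2.jpg", "hat_1.jpg"]}
query_vec = embed("query.jpg")

ranked = sorted(library, key=lambda name: cosine(query_vec, library[name]), reverse=True)
print(ranked)  # most similar images first
```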
[00:33:01] SY: So we mentioned Nyckel, of course, we mentioned Fast.AI, we mentioned PyTorch, any other favorite resources or tools when it comes to learning and doing machine learning and AI and kind of getting into that world?
[00:33:16] OB: I’m sort of on the fence here whether to recommend something called Jupyter Notebooks. Are you familiar with those?
[00:33:20] SY: Yeah. Yeah.
[00:33:22] OB: So I have a bit of a love-hate relationship with those, because I think of them as candy. It’s so easy to just open up a notebook and start coding away, but then after a while you just kind of get sick to your stomach because you have a pile of code that is just a mess, right? You can execute the cells in any order, so it’s extremely easy to end up with a bug in there. So back in my previous job, I had this rule that you’re never allowed to save a notebook. Every time, you start from a fresh notebook. And the reason I say that is because what you should be doing is use it as a prototyping tool, write code and experiment a little bit, but then whatever useful fragments of code or functions or classes you develop, put them in your actual library and commit them, in an IDE, with as much type checking as possible with Python, and whatever pre-commit hooks and tests. So that next time you sit down, you just import that class from your “real code”.
[00:34:23] SY: Yeah. I remember the first time I saw Jupyter Notebook. I was like, “Oh my God! This looks out of control,” because I’m just not used to looking at… I’m a Rails developer. I’m not used to looking at code in that way and kind of having each line run its own thing and then you have to keep it in a certain order. It was a very different way of looking at code and way of coding that I wasn’t used to. And I was like, “Oh, man, if this is my world, I can see myself getting lost very easily.” So I totally understand what you mean.
[00:34:57] OB: Yeah. It’s a real blessing and a curse. And the Fast.AI guy loves notebooks, and he’s developed some way to actually convert notebooks into libraries. He does all his development in notebooks. So there might be a way. And especially if you’re working with data, it is amazing, right?
[00:35:15] SY: It is fast. It’s really quick to iterate.
[00:35:19] OB: You can look at things, see the outputs, render them, look at the statistics, and put up histograms and stuff like that very quickly. So yeah, it should probably be part of your toolbox if you want to get into this.
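Oscar’s “never save a notebook” rule amounts to promoting anything useful out of the notebook and into your real codebase, with types and tests. As a small illustration only (the module layout, file names, and function are hypothetical), a prototyped helper might end up looking like this once it leaves the notebook:

```python
# myproject/preprocess.py -- a helper promoted out of a prototyping notebook.
# (Module layout and names are hypothetical, for illustration only.)
from typing import List

def normalize_scores(scores: List[float]) -> List[float]:
    """Scale scores to the 0-1 range; returns zeros if all scores are equal."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]


# tests/test_preprocess.py -- committed alongside it, run by CI or pre-commit hooks.
from myproject.preprocess import normalize_scores

def test_normalize_scores():
    assert normalize_scores([2.0, 4.0, 6.0]) == [0.0, 0.5, 1.0]
    assert normalize_scores([3.0, 3.0]) == [0.0, 0.0]
```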
[00:35:36] SY: Now at the end of every episode, we ask our guests to fill in the blanks of some very important questions. Oscar, are you ready to fill in the blanks?
[00:35:44] OB: Sure. I would try.
[00:35:46] SY: Number one, worst advice I’ve ever received is?
[00:35:49] OB: Oh, yeah. I’m sure I’ve gotten plenty of bad advice, but I don’t have any particular one in mind.
[00:35:55] SY: You just rejected it? You tossed it all out?
[00:35:58] OB: I mean, there’s no such thing as bad advice. It’s just that it works for that person in their context. Right? So you always have to look at advice and ask, does that apply to my context?
[00:36:07] SY: Yeah, very diplomatic. Okay. I like that. Number two, best advice I’ve ever received is?
[00:36:14] OB: This is maybe a little corny, but my dad, when I was 15 or something and got my first job, said, “If someone gives you a broom, just sweep the shit out of the room.” Just work really hard. Even if someone gives you the dumbest task, own it, work really hard at it, and show that you can follow through and that you’re capable, and then you’ll get something more interesting to do next time. As far as careers go, I think that advice is actually pretty solid.
[00:36:42] SY: Yeah. I like that.
[00:36:43] OB: Just make yourself useful. Just don’t drop the ball on things and people will enjoy working with you.
[00:36:48] SY: Number three, my first coding project was about?
[00:36:51] OB: Yeah. So that was the game that I wrote in the BASIC programming language. It’s a little question-and-answer thing where it’s just a prompt saying, “Okay, you’re standing in front of a door. Do you open it or not?”
[00:37:03] SY: One thing I wish I knew when I first started to code is?
[00:37:06] OB: Yeah. I mean, that question is funny because I think it’s actually kind of nice to not know too much. If I had known everything I know now, I don’t think I would have even started, because it’s so hard. I feel like every time I look back at my old code, I’m like, “Oh, this is terrible.” If I had known all the things that I feel I need to know today, I would have been so overwhelmed. Don’t worry about it. Just start and go from there.
[00:37:34] SY: Well, thank you again so much for joining us, Oscar.
[00:37:37] OB: Cool. Thanks for having me.
[00:37:44] SY: This show is produced and mixed by Levi Sharpe. You can reach out to us on Twitter at CodeNewbies or send me an email, hello@codenewbie.org. For more info on the podcast, check out www.codenewbie.org/podcast. Thanks for listening. See you next week.
Thank you to these sponsors for supporting the show!