Redis is not just a cache. It is used in the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to Madelyn Olson, a principal engineer at AWS, during an interview on the New Stack Makers at KubeCon North America in Detroit.
Olson said that people have a primary backend database or some other workflow that takes a long time to run. They store the intermediate results in Redis, which provides lower latency and higher throughput.
"But there are plenty of other ways you can use Redis," Olson said. "One common way is what I like to call it a data projection API. So you basically take a bunch of different sources of data, maybe a Postgres database, or some other type of Cassandra database, and you project that data into Redis. And then you just pull from the Redis instance. This is a really great, great use case for low latency applications."
Redis creator Salvatore Sanfilippo's approach provides a lesson in how to contribute to open source, which Olson recounted in our interview.
Olson said Sanfilippo was the only maintainer with write permissions for the project. That meant contributors had to engage quite a bit to get a response from him. So Olson did what open source contributors do when they want to get noticed: she "chopped wood and carried water," a phrase that in open source refers to taking care of the unglamorous tasks that need attention. That helped Sanfilippo scale himself a bit and helped Olson get involved in the project.
It is daunting to get into open source development work, Olson said. A new contributor will face people with a lot more experience and may be afraid to open issues. But if a contributor has a real use case and helps with documentation or a bug fix, most open source maintainers are willing to help.
"One big problem throughout open source is, they're usually resource constrained, right?," Olson said. "Open source is oftentimes a lot of volunteers. So they're usually very willing to get more people to help with the project."
What's it like now working at AWS on open source projects?
Things have changed a lot since she joined AWS in 2015, Olson said. APIs were proprietary back in those days. Today, it's almost the opposite of how it used to be.
Keeping something internal now requires justification, Olson said; internal differentiation is no longer needed. Open source Redis is the important part, with AWS on top as the managed service.
Colleen Coll 0:08
Welcome to this special edition of The New Stack Makers on the road. We're here at KubeCon North America with discussions from the show floor with technologists giving you their expertise and insights to help you with your everyday work. Amazon Web Services is the world's most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally. Millions of customers trust AWS to power their infrastructure, become more agile, and lower costs.
Alex Williams 0:45
Everyone, we're on the show floor at KubeCon doing a few interviews. And my first guest is Madelyn Olson, software engineer at Amazon Web Services. How are you doing?
Madelyn Olson 0:57
I'm doing great. It's exciting to be here.
Alex Williams 0:59
Good. I'm gonna just try to pull up my notes here, because I did have some questions that I want to make sure that I get to. First, though, tell me about yourself, Madelyn. What is your background? What is your focus here at an event like this? What kind of technologies are you working on? I know you work on Redis, but I think it's a little broader than that as it relates to cloud native applications?
Madelyn Olson 1:21
Sure. So my main job at the moment is I am a Principal Engineer at Amazon Web Services, and I'm also a maintainer of the Redis open source project. And I'm here today to talk more about how Redis can be used in the broader cloud native ecosystem. A lot of people think of Redis as just a cache; I'm hoping to show how there are various other ways that Redis fits into many service-oriented architectures and helps simplify the deployment and development of modern applications.
Alex Williams 1:50
So tell us a few ways that Redis does fit. And will those be discussions you'll have here?
Madelyn Olson 1:57
Sure. So as I said, the main use case for Redis is as a cache. People have a primary backend database or some other workload that takes a long time to run, and so they store the intermediate results in Redis, which you can read with lower latency and higher throughput. The whole idea here is that Redis is often deployed in a non-durable way. But there are plenty of other ways you can use Redis in that non-durable way. One common way is what I like to call a data projection API. You basically take a bunch of different sources of data, maybe a Postgres database, or some other type of Cassandra database, and you project that data into Redis. And then you just pull from the Redis instance. And if the data is not there, you throw a fault to customers. This is a really great use case for low latency applications, where the data not being there is sort of the same as, you know, a timeout. Another great use case I want to talk about, which is specific to the talk I will be doing later, is basically as a message broker. Most people want a durable message broker, but there are a lot of great use cases for an ephemeral message broker. And again, it really ties back to the low latency aspect of Redis. If you're building an application where an event being late is basically the same as if it was lost altogether, that's the perfect use case for Redis. I also want to highlight the fact that Redis can also be used in a durable configuration, which is a little bit less common; most people don't think of Redis that way, because it makes very non-ideal trade-offs for writes. Adding durability slows the write path down to a similar throughput and latency as a traditional database, but Redis can still provide very low latency reads.
So it's still great for that message broker use case I talked about a second ago, where you basically want to have all your applications talking together by passing messages to each other. And you can use Redis for this use case. And that's also part of the talk I'll be doing later this conference as well.
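The ephemeral-broker semantics Olson describes match Redis Pub/Sub: a message is delivered only to subscribers connected at publish time, and anything else is simply dropped. The sketch below is an in-process stand-in for those semantics, not a Redis client; with redis-py the equivalent calls would be `publish()` and `pubsub().subscribe()`, and the `subscribe`/`publish` helpers here are hypothetical.

```python
# In-process sketch of fire-and-forget (ephemeral) broker delivery:
# no persistence, no replay; late or unobserved events are lost.

subscribers = {}  # channel name -> list of subscriber inboxes


def subscribe(channel):
    """Register a new subscriber and return its inbox."""
    inbox = []
    subscribers.setdefault(channel, []).append(inbox)
    return inbox


def publish(channel, message):
    """Deliver to current subscribers only; returns the receiver count,
    mirroring what Redis PUBLISH reports."""
    queues = subscribers.get(channel, [])
    for inbox in queues:
        inbox.append(message)
    return len(queues)


publish("events", "lost")   # nobody listening yet: message is gone forever
inbox = subscribe("events")
publish("events", "seen")
assert inbox == ["seen"]    # only messages published after subscribing arrive
```

This is exactly the trade-off Olson highlights: if a late event is as good as a lost one, there is no reason to pay for durable queuing.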
Alex Williams 3:47
Low latency is a topic of interest among a lot of developers. What are some of the hurdles you're seeing people have with low latency applications?
Madelyn Olson 3:57
Two big hurdles I've seen, both in my work at AWS, where we care a lot about low latency, and within the Redis community: it's very easy to build sort of an unstable application. It's really hard to get all the configurations right. If anything goes wrong in your application, all of a sudden latency starts spiking up, either because a node dies, or maybe some other part of your application slows down a little bit, and you start seeing cascading failures throughout the system. So what's really hard to do right, but really important, is building data resiliency and making sure that even if something fails, the rest of the application continues working. That's something I'll talk about a little bit in my talk, and something that I really think AWS as a whole can bring to the conversation, because we are the largest cloud IT vendor, and we've really learned a lot of these lessons about how to make sure applications stay running and stay resilient.
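One concrete shape of the resiliency Olson describes is making sure a cache-tier failure degrades a request rather than cascading through the system. The sketch below is a hypothetical illustration of that idea, not any AWS or Redis implementation; `cache_get`, `db_get`, and `CacheDown` are invented names, and a dict stands in for the backing database.

```python
# Sketch: prefer the cache, but survive its failure by falling back to
# the backing store, so one dead cache node doesn't take the app down.

class CacheDown(Exception):
    """Raised when the cache tier is unreachable (e.g. a node died)."""


def cache_get(key, healthy=True):
    """Stand-in cache read: returns None on a miss, raises when down."""
    if not healthy:
        raise CacheDown
    return None  # cold cache in this sketch


DATABASE = {"user:1": "Ada"}  # stands in for the primary database


def db_get(key):
    return DATABASE[key]


def resilient_read(key, cache_healthy=True):
    """Read through the cache; on a miss or cache failure, fall back."""
    try:
        value = cache_get(key, healthy=cache_healthy)
        if value is not None:
            return value
    except CacheDown:
        pass  # degrade gracefully instead of failing the request
    return db_get(key)


assert resilient_read("user:1", cache_healthy=False) == "Ada"
```

Note the caveat Olson raises later in the interview: this fallback only works if the backing database is scaled to absorb the traffic the cache was shielding it from.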
Alex Williams 4:45
Okay. So you're pretty deeply involved in the Redis community. How did you get involved, and what advice would you give to people who are new to open source projects, or maybe are in existing open source projects but are starting to branch out a little bit? There's this whole kind of talk about "chop wood, carry water," for example.
Madelyn Olson 5:05
Yeah, so I joined sort of an interesting project. It was originally led by a single individual, Salvatore Sanfilippo, the original creator of Redis. And he was the only maintainer, the only one with write permissions to the project. And so that type of project really requires a lot of engagement to get this person responding to you. Salvatore had his vision of the project, and he really liked focusing on the feature part. And so the main role that I played getting into it was, as you said, chopping wood and carrying water: helping fix small bugs, improving documentation, fixing small issues, creating reproductions for issues. And that really helped the maintainer scale himself a little bit, and it helped me get involved in the project. A lot of the work I was doing was based on my experience at AWS, so I had a little bit of a freebie to get into the project, because the service I worked on was extensively using Redis. But I always recommend that if you're using an open source project, that's usually the best place to get involved. It's very daunting to get into open source development work. It's very easy for people to see these open source developers as knowing a lot more than you, and to be kind of afraid to open issues. But if you have a real use case and you have an issue, just open issues, document reproductions, and if you have a bug, just try to fix it. Most open source maintainers are really willing to help you learn, to kind of give you the right nudge so that you can help fix the problem. One big problem throughout open source is they're usually resource constrained, right? Open source is oftentimes a lot of volunteers, so they're usually very willing to get more people to help with the project.
Alex Williams 6:41
I read this blog post by Madelyn, and I really found it interesting about your work in the Redis community. The focus of the story was the PR you made for Transport Layer Security. I'd love to hear that story, because it says a lot about how you, essentially through your context, understanding and hard work, arrived at a role in the community that has, I think, done a lot for both you and for the people who are part of it.
Madelyn Olson 7:15
Yeah, so the context here is I was involved in helping add transport layer security to our managed services, and for people wondering what that is, that's ElastiCache and MemoryDB from Amazon. At those two services, we had transport layer security, TLS, built sort of natively into our service. And it was a really common ask from the community as well, but as mentioned, the previous maintainer of Redis at the time, Salvatore, thought it was too complex to really add into the project. It had been a request long before we implemented it, and he had always pushed back on that complexity part. And so I actually just tried to take our implementation, create a PR, and throw it out to the community, like, hey, here's our implementation, will you accept it? And the answer was no, it's still too complex. And I think a lot of companies would kind of say, well, we did our job, we have a pull request, we've put it into the wild, let's call that done. But I think we could have done better. So what we did is we actually went to the annual Redis conference, RedisConf, and I talked to Salvatore and tried to walk through why this was the right solution. And he sort of talked through: here's all this other stuff that I think would be a better solution, but I don't have time, because I don't think it's important. I was like, that's fine. Let me do all that work, let me prototype all this, let me show that our solution is still the best one. I think one of the most important things I did at that point was create a Slack channel for Redis developers. It's a little thing, but it's very useful. It's so important, because at that point all we had was GitHub, and it's not a great channel for synchronous communication.
And from there, we set up a monthly thing: hey, here are the big things we're working on, here's what we learned. And from there, I took what he wanted to do, his prototype, and I built it all out and showed that the main problem with it was that it was much less performant than the original one. And so he was like, okay, fine, let's go back to the original implementation. And at that point, someone else who had joined the community development channel said, hey, I actually have a slight tweak of the original implementation that actually solves a lot of the problems Salvatore was concerned about. So at that point, we tweaked the original implementation to be a little bit more abstract, to have a layer of abstraction away from all of the networking handling. And Salvatore was like, okay, this is probably good enough, and it got accepted and came out in Redis 6. That's kind of the story, and it really emphasizes that there's a lot more to getting major changes into projects besides just writing the code. It's building consensus, making sure we have community, making sure everyone's engaged, and getting a diverse set of viewpoints, which helps solve the problem.
Alex Williams 9:49
What did you learn about yourself through that process? And who did you go to to find perspective?
Madelyn Olson 9:57
What I learned about myself, which I guess is less common in developers than I thought, is that I'm very patient. The whole process, this PR, took almost two years. And the whole time, I was kind of just plugging along; when I had time, I would work on it. And throughout, a lot of people in the Redis community were kind of like, eh, it's not worth it. But in our fast-moving industry, a lot of people want things to be done now, and oftentimes, to get something really done, you have to be slow, you have to be methodical, and you have to sort of slow down and make it happen.
Alex Williams 10:25
So has that helped with the continuing complexity that you're facing with Redis? What are some of those continuing complexities that you're facing now? You mentioned low latency in the Redis community. Is that a topic of interest?
Madelyn Olson 10:36
Yeah, so we actually had a change of governance. Salvatore stepped down two years ago, and we have a new governing body, of which I am now a part, and that body sort of came about from this Slack group that we created. And this new group really instills those ideas that there are a lot of hard problems Redis needs to solve, right? Redis was built really for the 2010s. It was focused on caching, it was focused on the burgeoning web, and it was never really built for cloud native development. Like, how do you build Redis effectively into an ecosystem? How do you really get Redis to be stable and maintain replication? Redis was really built in a world where an operator would come in and individually tune every node, whereas nowadays a lot of people expect to deploy it through pods and Kubernetes, and just be able to scale out their deployments, to have multiple nodes that just get configured automatically. So those are the types of problems Redis is trying to solve now. One issue I've seen personally a lot: Redis has this deployment called cluster mode, which allows sharding of data, and we've seen a lot of issues where people just don't get it right, and it ends up just completely breaking. And if your cache fails in your application, your whole application can go down. This is common when you've got a cache and you don't scale your backend database enough to withstand all the traffic, so if the cache goes down, the database just gets overwhelmed and browns out. So really making Redis a hardened, production-ready system is, I think, a big problem that the community faces today. There's also this continuing pressure that the Redis architecture was built 10 years ago.
So a lot of development has happened since, in both CPU architecture and kernel architecture, and there are a lot of better ways we can build Redis to be more scalable, higher throughput, more efficient per core. And we see some other databases starting to innovate in this area and building new technologies. And we really need to make sure that we're staying on the cutting edge here, because Redis's value, as I said, is high throughput and low latency. If there are other, faster databases that do the same thing, people will start using those.
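The cluster mode Olson mentions shards data by mapping every key to one of 16384 hash slots, which are then distributed across the nodes. The documented algorithm is CRC16 (CCITT/XMODEM variant) of the key modulo 16384, with "hash tags" in braces letting related keys land on the same shard. A minimal sketch of that key-to-slot mapping:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM polynomial 0x1021), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc


def hash_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16384 hash slots.

    Honors hash tags: if the key contains a non-empty {...} section,
    only that section is hashed, so e.g. {user1}.following and
    {user1}.followers always live on the same shard."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag only
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

Misunderstanding this mapping is one way deployments "don't get it right": multi-key operations only work when all keys hash to the same slot, which is what hash tags are for.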
Alex Williams 12:34
How long have you been at AWS? I believe seven and a half years. How would you compare AWS seven and a half years ago, to three years ago, or to today, in terms of the open source focus?
Madelyn Olson 12:50
So it's fundamentally different from when I joined back in 2015. When I joined, open source wasn't even on our radar. We inherited a lot from this belief that in order to best serve our customers, we retain all of our IP and we build sort of the best cloud services in the world. A couple of years ago, it started to become more apparent that customers don't like building against proprietary APIs; they like building against industry-standard open source. That's why Redis really took off, and that's why the managed ElastiCache service I worked on really succeeded and did very well. But still, at the time, we were very focused on: oh, well, we'll just build proprietary API extensions on top of it and keep it all to ourselves. That was sort of our thinking, maybe in like 2018, 2019. And then around that time, there was a pretty big paradigm shift, where we started to realize that what really differentiates us is that we're managing the application on behalf of customers. So our differentiation is really in our control plane, in our compliance guarantees, in our security and our best practices, and not so much in the fact that there's an open source database there. We really want customers to use these open source databases to do dev and test, and then, once they've productionized, to start using our managed services. So at that point, it became very clear that it's actually in the best interest of our customers to just push everything open source. The best example: when I started contributing to Redis back in 2018, every single contribution I made had to go through a lawyer. I had to make a ticket in our internal ticketing system to tell people what we were doing and get approval for it. And now it's almost the opposite, where it's like, hey, if you want to keep something internal, you have to justify it. We don't need that internal differentiation.
We want open source Redis to be the important part, and us to just be the managed service on top of it.
Alex Williams 14:33
Madelyn, thank you so much for taking the time to talk today. I've learned so much about Redis and its background, how it's changing and adapting, and your role in the community. So thank you.
Madelyn Olson 14:44
Yeah, it was delightful talking to you, Alex.
Colleen Coll 14:46
Amazon Web Services is the world's most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally. Millions of customers trust AWS to power their infrastructure, become more agile, and lower costs.
Alex Williams 15:05
Thanks for listening. If you like the show, please rate and review us on Apple Podcasts, Spotify, or wherever you get your podcasts. That's one of the best ways you can help us grow this community, and we really appreciate your feedback. You can find the full video version of this episode on YouTube; search for The New Stack, and don't forget to subscribe so you never miss any new videos. Thanks for joining us, and see you soon.
Transcribed by https://otter.ai