Redis, best known as a data cache or real-time data platform, is evolving into much more, Tim Hall, chief of product at the company, told The New Stack in a recent TNS Makers podcast.
Redis is an in-memory, or memory-first, database, which means the data lands there first, and people use it for both caching and persistence. These days, the company supports a number of flexible data models, and one of the brand promises of Redis is that developers can store data in the shape they are already working with. As opposed to a SQL database, where you might have to turn your data structures into columns and tables, you can store the data structures you're working with directly in Redis, Hall said.
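The difference is easiest to see with a few commands. The sketch below is illustrative only (the key names are invented, and the `REV` form of `ZRANGE` assumes Redis 6.2 or later): each native Redis structure holds the data in the shape the application uses, with no table mapping in between.

```
HSET user:42 name "Ada" plan "pro"       # a hash: one field per attribute
ZADD leaderboard 1500 "ada" 900 "bob"    # a sorted set: scored members
ZRANGE leaderboard 0 -1 REV WITHSCORES   # read back, highest score first
LPUSH recent:views "sku:A1"              # a list used as a recency feed
```

In a relational store, each of these would likely need its own table and an ORM layer; here the structure in the database is the structure in the code.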
“About 40% of our customers today are using us as a primary database technology,” he said. “That may surprise some people if you're sort of a classic Redis user and you knew us from in-memory caching, you probably didn't realize we added a variety of mechanisms for persistence over the years.”
Meanwhile, Redis does store the data on disk, behind the scenes, while keeping a copy in memory. If there is any sort of failure, Redis can recover the data from disk, replay it into memory, and get you back up and running. That mechanism has been around for about half a decade now.
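The persistence Hall describes is configurable; a minimal `redis.conf` sketch might look like the following (the values are examples for illustration, not tuning recommendations):

```
# RDB snapshotting: dump the dataset to disk if at least
# 1 write happened in the last 900 seconds
save 900 1

# AOF: append every write to a log, fsync it once per second;
# on restart, the log is replayed into memory
appendonly yes
appendfsync everysec
```

RDB gives compact point-in-time snapshots; the append-only file narrows the window of data that can be lost on a crash to roughly one second of writes.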
Yet Redis is playing what Hall called the "long game," particularly in terms of continuing to reach out to developers and showing them the latest capabilities.
“If you look at the top 10 databases on the planet, they've all moved into the multimodal category. And Redis is no different from that perspective,” Hall said. “So if you look at Oracle, it was traditionally a relational database; Mongo is traditionally a JSON document store only; and obviously Redis is a key-value store. We've all moved down the field now. Now, why would we do that? We're all looking to simplify the developer’s world, right?”
Yet, each vendor is really trying to leverage their core differentiation and expand out from there. And the good news for Redis is speed is its core differentiation.
“Why would you want a slow data platform? You don't,” Hall said. “So the more that we can offer those extended capabilities for working with things like JSON, or we just launched a data structure called t-digest that people can use, and we've had support for Bloom filter, which is a probabilistic data structure. All of these things expand our footprint. We're saying if you need speed, and reducing latency and having high interactivity is your goal, Redis should be your starting point. If you want some esoteric edge case functionality where you need to manipulate JSON in some very strange way, you probably should go with Mongo. I probably won't support that for a long time. But if you're just working with the basic data structures, and you need to be able to query and update your JSON document, those straightforward use cases we support very, very well, and we support them at speed and scale.”
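The Bloom filter Hall mentions is easy to demystify. The toy pure-Python sketch below is not Redis's implementation (Redis Stack exposes the real one through commands such as `BF.ADD` and `BF.EXISTS`); it only illustrates the probabilistic trade-off: membership tests may return false positives, but never false negatives, in exchange for a fixed, small memory footprint.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = 0  # a Python int used as a bit array

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 hashes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # True means "probably present"; False is always definitive.
        return all(self.bits & (1 << p) for p in self._positions(item))
```

A typical use is guarding an expensive lookup: ask the filter first, and only hit the slow store when the answer is "maybe".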
As a Redis customer, Alain Russell, CEO at Blackpepper, a digital e-commerce agency in Auckland, New Zealand, said his firm has undergone the same transition.
“We started off with Redis as a cache; that helped us speed up traditional data that was slower than we wanted it,” he said. “And then we went down a cloud path a couple of years ago. Part of that migration included us becoming, you know, what's deemed as ‘cloud native.’ And we started using all of these different data stores and data structures, and dealing with all of them is actually complicated. You know, and from a developer perspective, it can be a bit painful.”
So, Blackpepper started looking for how to make things simpler while keeping its platform very fast, and it looked at Redis Stack. “And honestly, it filled all of our needs in one platform. And we're kind of in this path at the moment, we were using the basics of it. And we're very early on in our journey, right? We're still learning how things work and how to use it properly. But we also have a big list of things that we're using other data stores for, traditional data, and working out, okay, this will be something that we will migrate, you know, because we use persistence heavily now in Redis.”
Twenty-year-old Blackpepper works with predominantly traditional retailers and helps them in their omni-channel journey.
Hall said there are three modes of access to the Redis technology: the Redis open source project; Redis Stack, which the company recommends that developers start with today; and Redis Enterprise Edition, which is available as software or in the cloud.
“It's the most popular NoSQL database on the planet six years running,” Hall said. “And people love it because of its simplicity.”
Meanwhile, it takes effort to maintain both the commercial product and the open source effort. Hall, who has worked at Hortonworks and InfluxData, said, “Not every open source company is the same in terms of how you make decisions about what lands in your commercial offering and what lands in open source, and where the contributions come from and who's involved.”
For instance, “if there was something that somebody wanted to contribute that was going to go against our commercial interest, we probably would not merge that,” Hall said.
Redis was run by project founder Salvatore Sanfilippo for many, many years, and he was the sole arbiter of what landed and what did not land in Redis itself. Then, over the last couple of years, Redis created a core steering committee. It's made up of one individual from AWS, one individual from Alibaba, and three Redis employees, who look after the contributions coming in from Redis open source community members.
“And then we reconcile what we want from a commercial interest perspective, either upstream, or things that, frankly, may have been commoditized and that we want to push downstream into the open source offering,” Hall said. “And so the thing that you're asking about is sort of my core existential challenge all the time, that is figuring out where we're going from a commercial perspective. What do we want to land there first? And how can we create a conveyor belt of commercial opportunity that keeps us in business as a software company, creating differentiation against potential competitors that show up? And then over time, making sure that those things that do become commoditized, or maybe are not as differentiating anymore, I want to release those to the open source community. But this upstream/downstream kind of challenge is something that we're constantly working through.”
Blackpepper started out using Memcached to speed up data, then migrated to open source Redis, via Amazon ElastiCache, when it moved to the AWS cloud, Russell said.
The Redis TNS Makers podcast goes on to look at the use of AI/ML in the platform, the acquisition of RESP.app, the importance of JSON and RediSearch, and where Redis is headed in the future.
Alex Williams 0:08
You're listening to The New Stack Makers, a podcast made for people who develop, deploy and manage at-scale software. For more conversations and articles, go to thenewstack.io. All right, now on with the show.
Colleen Coll 0:28
Redis provides a competitive edge to any business by delivering open source and enterprise grade data platforms to power applications that drive real time experiences. At today's scale, developers rely on Redis to build performance, scalability, reliability, and security into their applications.
Darryl Taft 0:49
Hello to you, and welcome to another fantastic TNS Makers podcast with The New Stack. I'm Darryl Taft, news editor of The New Stack, and our goal is to continue to inform you about the hottest developer issues of the day. Today's episode covers databases. And our special guests are Tim Hall, chief product officer at Redis, and our other guest is Alain Russell, CEO of Blackpepper, a Redis client.
Alain Russell 1:23
Hey, Darryl, nice to meet you.
Darryl Taft 1:24
Same here. So I'd like to get, like, a summary of what you guys do, you know, what Redis is offering, and Alain, what is Blackpepper all about?
Alain Russell 1:37
Yeah, we're a Redis customer, and I'll let Tim uncover what Redis does. But we're an e-commerce agency based in New Zealand; we service the New Zealand and Australian market and work with predominantly traditional retailers and help them in their omnichannel journey. So yeah, we've been around for 20 years, so quite old. There's engineers in a team of about 50 at the moment, but focused 100% on e-commerce. That's essentially our focus.
Tim Hall 2:04
And then on the Redis side: yeah, Redis is an in-memory database, or memory-first database, which means data lands there, and people are using us for both caching and persistence. These days, we have a number of flexible data models, and one of the brand promises of Redis is developers can store the data as they're working with it. So as opposed to, say, a SQL database where you might have to turn your data structures into columns and tables, you can actually store the data structures that you're working with directly into Redis. So whether it's a session, a sorted set, a hash, JSON documents and more, we provide that flexibility. There's three modes you can get Redis in: we have an open source piece of software you can download and run on your laptop; Redis Stack is what we're recommending that developers start with today; and then there's Enterprise Edition that's available as software or in the cloud. And it's the most popular NoSQL database on the planet six years running. People love it because of its simplicity, the ease at which developers can understand how to take advantage of it, its scale, it's fast, super fast, and, frankly, its reliability. So that's Redis in a nutshell. What's Redis Stack specifically? Let me parse that down a little bit. So Redis was actually launched about a decade ago as a sort of standalone open source project. And over the years, as we've tried to extend and add additional capabilities to it, those ended up being constructed as sort of add-on modules and other pieces. And, you know, if you go back and you look at the manifesto for Redis, it talks about things like abhorring complexity. And so to fulfill that for our developer, it felt awkward to say: go grab Redis open source, and oh, by the way, if you wanted to do these other things, then you also have to grab this piece, grab that piece, put it all together. So what we did in March of this year, 2022, is we packaged up all of the extended capabilities that we've built over the years, along with a companion app called RedisInsight, which is a desktop developer tool for debugging and understanding how your application is working with Redis. And we packaged it into one easy-to-use bundle called Redis Stack. And so if you're a purist, an open source purist, and you've used Redis for 10 years, I imagine you're probably still using either the open source or you've upgraded to our commercial model. But if you're a developer getting started today, or if you're interested in taking advantage of some of those new capabilities, Redis Stack is your starting point.
Darryl Taft 4:28
So most people know Redis as an open source cache. Can you describe how Redis has evolved and where you're seeing it applied today? For instance, we did a story from KubeCon about how Redis is not just a cache, that it can be used as a data projection API. I don't know if you saw that, but it got a lot of interest.
Tim Hall 4:50
Yeah, about 40% of our customers today are using us as a primary database technology. And that may surprise some people. Again, if you're a sort of classic Redis user and you knew us from in-memory caching, you probably didn't realize we added a variety of mechanisms for persistence over the years. And so yeah, that's the first thing: you can actually set up Redis now to store the data, and it does store it on disk, sort of behind the scenes, while keeping a copy in memory. And so if there's any sort of failure, we can recover the data off of disk and replay it into memory and get you back up and running. So that's a mechanism that has been around for, I don't know, half a decade now. But, you know, it's kind of a long game to convince people of something new sometimes, particularly when, you know, they all have day jobs and are solving other problems. And, you know, Alain, I don't know how frequently you're reading our blog; I'm sure you're not reading it on a weekly or daily basis. You may have your account team helping you point out the new things that are coming. But, you know, it's a little bit of a long game, Darryl, I have to say, in terms of continuing to reach out to developers and showing them what the latest capabilities are. But yeah, I mean, you know, one of the things that you see out of the top sort of 10 databases on the planet, right, if you go look at all of them, is they've all moved into the multimodal category, and Redis is no different from that perspective. So if you look at Oracle, it was traditionally a relational database; Mongo is traditionally a JSON document store only; and obviously Redis is a key-value store. We've all moved down the field now. Now, why would we do that? We're all looking to simplify the developer's world, right?
Why should you have a unique data platform for each type of data that you're working with? It creates a lot of complexity, both for the developer to figure out which ones they're going to connect to, as well as the operator having to install, run and maintain all these different pieces. And so each vendor is really trying to leverage, you know, what I would say is their core differentiation, and expand out from there. And the good news for Redis is speed is our core differentiation. And so why would you want a slow data platform? Right? You don't. So the more that we can offer those extended capabilities for working with things like JSON, or we just launched a data structure called t-digest that people can use, and we've had support for Bloom filter, which is a probabilistic data structure, like, all of these things, as we kind of expand our footprint, we're saying: if you need speed, and reducing latency and having high interactivity is your goal, Redis should be your starting point. If you want some esoteric edge case functionality where you need to manipulate JSON in some very strange way, you probably should go with Mongo. Like, I probably won't support that for a long time, right? But if you're just working with the basic data structures, you need to be able to query, you need to be able to update your JSON document, those straightforward use cases we support very, very well, and we support them at speed and scale.
Alain Russell 7:47
Yeah, I think, from our perspective too, the speed was certainly our starting point. So we were a Redis customer, and I can talk about how, because we've gone through the same transition, right? We started off with Redis as a cache, and it helped us speed up traditional data that was slower than we wanted it. And then we went down a cloud path a couple of years ago. So for us, we've been through a cloud migration; we've moved out of the local data centers and hardware that we were using. Part of that migration included us becoming, you know, what's deemed as cloud native. And we started using all of these different data stores and data structures, and dealing with all of them is actually complicated. You know, from a developer perspective, it can be a bit painful. So, you know, when we started looking for how to make our world simpler, but also keep it very, very fast, we looked at Redis Stack, and honestly, it filled all of our needs in one platform. And we're kind of on this path at the moment; we were using the basics of it. And we're very early on in our journey, right? Like, we're not right the way down. We're still learning how things work and how to use it properly. But we also have a big list of things that we're using other data stores for, traditional data, and working out, okay, this will be something that we will migrate, you know, because we use persistence heavily now in Redis. And we're using persistent data, but we're not using it as our primary data. So the issue we obviously have is, you know, we've got legacy code; everyone has legacy code that they're dealing with. So we still have that traditional relational data that we need to access from lots of different places, and rewriting that for us isn't feasible to do in one go. But we can do a chunk at a time. And the chunks that we're doing are the ones where we're having performance issues.
And that's working really, really well for us on that journey that we're on.
Darryl Taft 9:35
Okay, thanks. I think that answered about two or three of my other questions. I appreciate it.
Alain Russell 9:41
Happy to expand on them.
Darryl Taft 9:45
Back to Tim, you mentioned something called T digest. What is that?
Tim Hall 9:49
Yeah, so t-digest. It's a technical name for a data structure type, for how the data is organized and stored, and people use it for a wide variety of reasons. I know in my previous company, we actually used t-digest to compare data on disk in two different locations. Rather than trying to compare line by line, record by record, when you're dealing with millions and billions of data points, potentially, what you want is some sort of summary that gives you a rough sketch of the data you've got. And t-digest is a type of data structure where you can sketch it out and store that sketch, and then you can retrieve it and do comparisons. And it's quicker than trying to, you know, retrieve every single data point or summarize every single data point and get a discrete answer, if you will. So yeah, that's the rough description.
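Hall's "compare summaries, not records" idea can be sketched in a few lines. This is a deliberately crude stand-in for t-digest (a real t-digest keeps adaptive centroids with extra resolution at the tails and supports cheap merging; Redis Stack exposes it via `TDIGEST.*` commands): here we just read a handful of fixed quantiles off the sorted data to show why a small summary is enough to tell two datasets apart.

```python
def quantile_sketch(values, qs=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Summarize a dataset as a handful of quantiles.

    A toy illustration only: sorts the data and picks the nearest
    element for each requested quantile.
    """
    s = sorted(values)
    return tuple(
        s[min(int(q * (len(s) - 1) + 0.5), len(s) - 1)] for q in qs
    )


def roughly_same(a, b, tolerance=1.0):
    """Compare two datasets via their sketches, not record by record."""
    return all(
        abs(x - y) <= tolerance
        for x, y in zip(quantile_sketch(a), quantile_sketch(b))
    )
```

Comparing two billion-row datasets this way means shipping five numbers per side instead of every record, at the cost of only detecting distribution-level differences.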
Darryl Taft 10:38
I wanted to ask you about how you manage the open source project versus the commercial offering. How do you deal with that? And what goes in where?
Tim Hall 10:51
Yep. So that's a great question. By the way, this is my third open source company that I've run products for. So previously, I was at Hortonworks, as part of the Hadoop ecosystem, and worked on a lot of different Apache projects, in terms of understanding how that worked. And then previously I was head of products at InfluxData, the time series data platform company, and now Redis. And here's what I can tell you: not every open source company is the same, right, in terms of how you make decisions about what lands in your commercial offering, what lands in open source, where the contributions come from, and who's involved. If I go back to the days of Hortonworks, the Apache Software Foundation is a large meritocracy. And so, you know, you're constantly interacting with folks from different organizations that have different opinions about the direction in which individual components should go. You, as a product manager then, or maybe the head of products, are looking at how those contributions sort of aggregate into the thing that you're trying to put in the market and how you differentiate against others. And that can be a real challenge, frankly. You know, in the Hadoop ecosystem, there were two models. Hortonworks was 100% open source; everything was available, everything was an Apache Software Foundation project. And then there was their competitor at the time, Cloudera, and by the way, those companies have since merged. Cloudera had a closed source model where the bulk of their platform was open source, but then they had some closed source commercial pieces that they built on top. And so that's a way to differentiate. At Influx, every committer, being every person that was allowed to write code and commit to the core platform, was an InfluxData-badged employee.
And while we accepted contributions under the MIT license, and the contributor license agreement that's associated with InfluxDB and Telegraf and the other projects that they had, they were all reviewed by other developers, obviously, and we reviewed them for commercial interest. So if there was something that somebody wanted to contribute that was going to go against our commercial interest, we probably would not merge that. Now they're, of course, free to fork the project and apply that commit to their own fork, but it doesn't necessarily mean that we're going to accept it. Now, Redis was run by Salvatore Sanfilippo for many, many years, and he was the sole arbiter of what landed and what did not land in Redis itself. And what we've done over the last couple of years is we have a core steering committee. It's made up of one individual from AWS, one individual from Alibaba, and three Redis employees, who look after the contributions that are coming in from our open source community members who want to contribute those things. And then we reconcile what we want from a commercial interest perspective, either upstream, or things that, frankly, may have been commoditized and that we want to push downstream into the open source offering. And so the thing that you're asking about is sort of my core existential challenge all the time: figuring out where we're going from a commercial perspective. What do we want to land there first? And how can we create a conveyor belt of commercial opportunity that keeps us in business as a software company, creating differentiation against potential competitors that show up? And then over time, making sure that those things that do become commoditized, or maybe are not as differentiating anymore, I want to release those to the open source community. But, you know, this upstream/downstream kind of challenge is something that we're constantly working through.
Darryl Taft 14:27
Following on, from your perspective, what's the value of a commercial vendor like Redis? What do they provide on top of a popular open source project, do you see?
Alain Russell 14:38
I think, for us, like, one of the big decisions that we wanted to go through in our cloud journey was: we don't want to run anything, right? So that's the benefit of infinite scale, but also managed services. From a Redis perspective, we just have to worry about the development side and accessing our data and dealing with it at that point. What we don't need to do is worry about servers and scaling and all of the things that we actually don't really want to have anything to do with. You know, there's a huge benefit in a world where we need everything to be real time, and we need to have answers really quickly. We have a support team with Redis on Slack at the moment; we have conversations with them as things come up, and they essentially can give us answers pretty quick. So, you know, often when we're hitting a roadblock, or we don't understand how to do something, or we're looking for guidance on the best way to do it, the people that we deal with are all ex-Redis users from different companies. You know, so we've got a great technical account manager at the moment at Redis, who actually worked at an e-comm company doing search and personalization, which for us is a resource that we've happened across, and it's great that she's around. It's a perfect resource for us to tap into and say: hey, here's some things that we're trying to understand, or what we're trying to do with search or with personalization, can you help? And that can be, you know, off-the-record comments, which are really useful for us, or it can be white papers or, you know, links to documentation. And I think that, for us, is probably one of the main benefits. And the fact that we have, you know, a set of databases available.
They're just there; we don't have to worry about what goes on with them in the background. So yeah, they're essentially fully managed for us, which I think in a cloud world is exactly the same as using, you know, managed RDS or managed Dynamo. We just want to worry about the dev side of what we're doing.
Darryl Taft 16:44
So I'm interested in your perspective of Redis as a customer. How did you decide to use Redis?
Alain Russell 16:54
Yeah, cool. That's such an easy one. So, I think I touched on it before: we were an open source Redis customer, right? So we started a journey where we used Memcached to speed up data, we migrated to Redis when we went into the AWS cloud, and we used ElastiCache Redis instances. In our world, we abstracted that away in our e-commerce platform to be basically, you know, set a key value, get a key value. So we weren't using a lot of the extra data structures that are available. And essentially, during COVID, I guess, we started running into scaling problems. So we had issues with writing, predominantly. E-comm, you know, as a technology, is very, very write-heavy. You know, we're dealing with wish lists and people adding to carts and data changing all the time. And we kind of ran into problems where you're essentially scaling your databases to handle that, right? But it's costing a fortune. And that's the reality: it costs a lot to scale for writing in the cloud world. So we started looking for different options to handle, partly, that write-scale problem that we had. And we were also running into a problem where, basically, the data structure that we were dealing with on a product was getting more complex, and the way that our customers wanted to be able to define, you know, how a product shows on the site, with the data points that we have on it, was getting more and more complex. And in a kind of relational database, it was getting very hard for us to get the data out quick enough. So we kind of went on a journey where we looked at all the different options. We looked at, you know, NoSQL options, we looked at Dynamo, we played around with Elastic, we tried to get MySQL JSON working for us. And they all kind of worked. But they were just as slow as our traditional data that we were playing with; we weren't actually getting the data fast enough.
And we were performance obsessed as a team. So our dev team are absolutely performance obsessed about trying to get the data out of our back end; we set ourselves a target of being able to get something back in under 100 milliseconds. So we were basically trying to get data out as fast as we can. And the beauty we had when we started testing Redis was, we have some concrete, real-world examples. We live in a world where everything's measured, and crazy, crazy fast. So we, you know, we see data that would traditionally take, you know, five to 10 seconds to generate in a relational SQL query, and we get that information now in a millisecond. And that's just purely based around the complexity and trying to deal with, you know, how customers are trying to find information and link that data. So this was all a batch process that we had running. For us, it's how we categorize sites and how things show, and that has all essentially become real time. And the benefit for us in that is we're starting to now change all of our UI, because we can show that data and we can show feedback to a customer as they're basically typing. So as a customer is creating, you know, collections or products or categories on the site, they can be adding in search criteria, and we can essentially tell them in real time what that's going to look like. Whereas in the traditional world, we'd queue something up, and we'd know eventually, in a few seconds, what it was going to look like. But that may be too late. So it's given us, like, a real different kind of approach now in how we're looking at everything, because we can say: actually, we can give you that in real time, and we can do it instantly, with feedback in the UI as you're playing around with different options. Yeah. And I think that's where we ended up: it was the only thing that could give us the data as fast as we wanted.
Darryl Taft 20:36
Was there any impact on the legacy systems? And have you moved on from those?
Alain Russell 20:43
I think, as I was saying, we still have legacy code that we're dealing with. Like, the transition for us, as we're starting on our journey, is we're starting to now use, you know, hashes and sorted sets. And t-digest actually sounds really interesting; we already have a use case, probably, that we were trying to solve the other day. So we are starting to use all the different things. What we're basically doing with it is trying to move parts of our logic across. So if you think about a product data set on a site, we may have a site that has 20,000 products. It's easy enough for us to move that into Redis and start using it, and we use RedisJSON for that. So we use JSON and indexing, so we can move all of that into Redis and start changing parts of our front end and back end stack to reference that data directly. What we can't do is get rid of the legacy data store, because we still have another 60-70% of our stack that is expecting that data to be in a traditional SQL database. So we're basically working on kind of two different ways of accessing the data. But anything that has a performance issue for us, or is new, is now starting to access, you know, the JSON directly. And it makes life pretty simple for us moving forward, because we handle the writing into two places as we ingest the data, and then we know that everything is in one place. And I mean, it's powering our search now; a lot of our search is using Redis. And we're building some tuning tools for customers that will basically allow them to run different options and things. But yeah, that data structure for us is available; we use it for all of our shopping cart checkout. So essentially Redis is handling all of our shopping carts. So we're persistently writing that, and it's the only place that we have a copy.
Darryl Taft 22:29
You mentioned JSON; I wanted to get into that a bit. Tim, why is Redis a fit for search and JSON use cases? You know, there are purpose-built search engines and JSON databases out there. How does Redis approach these markets?
Tim Hall 22:46
Yeah, well, I think I think Elaine said it best. If performance and eliminating latency is the name of the game, then you should probably look at Redis as the starting point for that. Yes, there are lots of search tech and other tech that's out there. And there certainly are use cases, right where you may want to use those. But if reducing that latency, it's like an Elaine, you can tell me where I get get off the track here. But like, when you have an E commerce use case, you want that user engaged with the experience in real time. Because anytime that it starts to slow down, what do you think they're gonna do? They're gonna bail and go somewhere else. Yeah. Right. But so that interactivity is the thing that keeps them engaged. And whether it's type ahead, whether it's recommendations being generated based on what they've typed, whether it's, you know, other types of suggestions and things that you can do, it's that interactivity that keeps the user engaged, right. And so, you know, we feel like, again, if if you're working with JSON as a data structure, you know, the challenge with that is you could reference the document itself, right, as a particular key. And so, you know, previously, what people were doing with Redis is, is, developers were like, well, I need to speed up JSON. But since Redis, did not support JSON documents, until relatively recently, they would convert them into strings, stuffed them into Redis. And then pull them back out, reconvert them back to JSON, and then do the work on top of that. I cringe can't even tell ya, yeah, even telling that story. Because, you know, frankly, it doesn't fulfill the brand promise of Redis, which is I want the developer to be able to work with that document, you know, without those strings and versions back and forth. And so we added that capability. 
The challenge is that now, because it's a document and it has structure to it, there may be nodes in that document that you want to index, and that you would like to create a query experience on top of; you may want to string pieces of data together. We've got some customers who are using it almost as a data fabric, and I know this is going to sound very strange, or a data mesh, where they've grown through acquisition. They ask, how am I going to present a single unified interface to my end customer when I've got all of these different acquisitions we've now pulled together? I can't rewrite those systems quickly, but what I can do is give customers a presentation layer that takes advantage of the fact that their data resides in all those other systems: publish it into Redis as a JSON data structure, make the customer identifier common across them, and then when that customer shows up, pull that data for them and present it to them quickly. Leave those existing systems in place, just as Alain was saying, because there are other things that rely on that data. But what you can do is provide an experience that is interactive, gives customers access to that data, and keeps them engaged with you. So I think the query capability we added on top of Redis, so it's not just key-value anymore, this idea of secondary indexing on top of either hashes or JSON documents, is super powerful for people to take advantage of to get that real-time speed and reduced latency. To me it's a huge win, and getting folks like Alain and the Blackpepper team to talk about how it's worked for them is part of the awareness work we need to continue to do: to show people they can be successful with it, and that we love caching, and we're caching and beyond.
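The presentation-layer pattern Tim describes, publishing records from several acquired systems into Redis as one JSON document per customer, might be sketched like this. The system names, fields, and key scheme are invented for illustration; `r` is any redis-py-style client:

```python
import json

def publish_unified_view(r, customer_id, source_records):
    # source_records maps a source system (e.g. an acquired brand) to the
    # record it holds for this customer; the source systems stay in place
    # and untouched, Redis only carries the merged presentation copy.
    doc = {"customer_id": customer_id}
    for system, record in source_records.items():
        doc[system] = record  # keep provenance per system
    # Store the merged document under a key shared across all systems.
    r.execute_command("JSON.SET", f"customer:{customer_id}", "$", json.dumps(doc))
    return doc
```

When the customer shows up, one fast read of `customer:<id>` serves the unified view instead of fanning out to every legacy system.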
Alain Russell 26:25
I'll actually just add to that. For us, one of the big things about JSON is we can write JSON directly to Redis. What we find really, really powerful is the fact that we can change indexes and change our full-text searching on the fly. We can add an index, and I don't understand the physics behind it, but it's instant. We can add an index, it's there instantly, and we can query it. And it doesn't seem to be affected by the size of the JSON we're storing or by how many objects we're storing. So it gives our development team a really quick way to test things and try to get things working without having to reindex and restore data and work through, OK, now we've shaped that JSON differently because we want to do something different with it. The beauty of the way that Redis full-text search works is we essentially can change those indexes and change those queries, and it's the same data in the background. We've stored it once; we don't have to do anything more with it. We're just scratching the surface of it, but the power we get from that and the ability to try things quickly is making a huge difference for us.
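Alain's on-the-fly indexing maps to the RediSearch commands below. The index name, key prefix, and field path are hypothetical; the point is that adding an index never rewrites the stored JSON documents themselves:

```python
def add_text_index(r, index, json_path, alias):
    # Define a new full-text index over JSON documents that already exist
    # under the given key prefix; the stored data is untouched.
    r.execute_command(
        "FT.CREATE", index, "ON", "JSON", "PREFIX", "1", "product:",
        "SCHEMA", json_path, "AS", alias, "TEXT",
    )

def query(r, index, text):
    # The index is queryable as soon as it has been built.
    return r.execute_command("FT.SEARCH", index, text)
```

Dropping one index and creating another with a different schema leaves the underlying documents exactly as they were stored.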
Tim Hall 27:30
And that's a really different experience than other sort of purpose-built full-text search engines, where they're not tuned to the developer experience. They're tuned to dealing with, you know, hundreds of pages of PDFs and other kinds of documents. They were built to solve a different kind of problem, and I would say Redis is well suited to the kinds of problems Alain and his team are trying to solve. But, yeah, there still is a horses-for-courses argument to be made. I'm not saying you should replace every possible full-text search use case with Redis; that would be both naive and a huge overreach. But in the kinds of environments and the kinds of use cases that Alain's team find themselves in, it seems like a great fit.
Darryl Taft 28:22
Yeah, good stuff. So, Tim, can you give some examples of where you see the greatest interest in Redis for search, and how strategic is Redis search?
Tim Hall 28:34
So our query and searching capabilities are actually fundamental to where we're going. I think it's a super powerful engine, and I think the explanation stands on its own. The thing I need to do for Alain and team is make it simpler. We know that people are having to do write-behind use cases, and they're having to do that pretty much on their own today, I'm assuming. So we're going to introduce a set of data integration capabilities to support write-behind, so that you can keep Redis in front. Because you have to keep some of those other legacy systems alive, or there are higher-latency or slower use cases you need to support where you may not need the data in Redis, I want to provide a separate persistence mechanism that allows you to write behind to MongoDB or MySQL or whatever else you want to have behind the scenes. So we'll make sure to brief you on that. We're also looking at the reverse. A lot of those systems today also support what's called change data capture, which is essentially a log of what updates are hitting the underlying database. We want to be able to take the change data capture log and turn it into Redis data structures that you could access the other way. That's all part of the sort of data integration set of capabilities that we're working on now. The plan is to get to public preview of some of those capabilities early in 2023, so stay tuned to see where we're going. But what you're getting at is, you know, JSON being a first-class data structure, a query capability being a first-class engine that works in real time and at speed, and what else we can do to extend that into the future, as we're looking at supporting exactly the kinds of environments that Alain's team are running into, and making it easier for them to deploy with less technology.
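A generic write-behind pattern of the kind Tim describes, keeping Redis in front while changes drain to a slower system of record, can be sketched in plain Python. The dicts here stand in for Redis and for a backing store like MySQL or MongoDB; a real implementation would flush asynchronously, for example from a Redis Stream:

```python
from collections import deque

class WriteBehindCache:
    """Fast store in front; writes flushed to a slower backing store later.

    `fast` and `slow` are plain dicts standing in for Redis and a legacy
    database; `backlog` stands in for a durable queue such as a Stream.
    """

    def __init__(self, fast, slow):
        self.fast = fast
        self.slow = slow
        self.backlog = deque()

    def put(self, key, value):
        self.fast[key] = value          # the application sees this immediately
        self.backlog.append((key, value))

    def flush(self):
        # Drain queued writes into the slower system behind the scenes.
        while self.backlog:
            key, value = self.backlog.popleft()
            self.slow[key] = value
```

Change data capture is the same flow in reverse: the backing database's update log is replayed into Redis data structures instead.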
Darryl Taft 30:18
Sounds good. Yeah, we're almost at time, and I haven't even gotten through my questions. So can we do a lightning round of sorts? Sure. One thing for Tim: what does the acquisition of RESP.app, or however you pronounce it, bring to Redis? Yeah.
Tim Hall 30:38
So this is our second acquisition of developer tools, and we are eager to consolidate what's gone on in the sort of developer and open source ecosystem around assistive tools for debugging and understanding how applications are working with Redis. We acquired assets related to RDBTools, we built RedisInsight, and now with RESP.app, not only the assets but the core developer behind it has joined our team. What we're trying to do is drive a single consolidated developer experience, a companion app to your IDE that helps you understand what's going on inside of Redis as you build your apps. We're super pumped about that acquisition and what it brings us, not only from an entrepreneur who was living in this space and trying to drive the best developer experience, but because we want that experience to be free and available to all developers, so they can feel powerful as they build solutions on top of the platform.
Darryl Taft 31:33
Who are some of your key competitors, and how does Redis measure up against them?
Tim Hall 31:38
I'd say the biggest thing, from my perspective, is folks that are taking our open source and deploying it as a managed service within their environments. The biggest one that comes to mind would be AWS. And I think the thing that hurts me a little bit is that there's a lot of confusion out there about AWS services not being Redis. People say, oh, I'm using Redis, and it's like, well, what are you using? And they say, I'm using ElastiCache. Well, that's effectively their fork of Redis open source. But when you look at the investments we've made to build up from open source to Redis Enterprise as our commercial offering, there's a lot of stuff they don't do. That equation, where people say this equals that, is a challenge for us, because it's not the same. We're finding customers as they go on their journey from the beginning of, hey, I'm in AWS, I might as well just check all the boxes and use all their services because it looks like X, Y and Z. But it's a lot of assembly required; it's kind of the Christmas Day scenario where you open the box and spend the rest of the day assembling all the parts. The challenge is, as they scale, they may not get the best operational experience. And to Alain's point, good luck when you contact them for support about what's going on; they have no clue about the innards. When you come to us and you use Redis Enterprise, you're going to get the best of what we have to offer for companies that are deploying, and you're going to get the best support experience. I can tell you our support team is amazing; they're still teaching me things about the product, frankly, and they're second to none in terms of supporting our customers and making sure those questions get answered and our customers can be successful.
Darryl Taft 33:23
Just a couple more. What is Redis doing about adding AI and ML to the platform?
Tim Hall 33:31
I'm glad you asked that; that's a very interesting question. There are a couple of capabilities we've started adding. The first is that we've been communicating about a core use case, which may not surprise you: the online feature store. An online feature store, effectively for ML models, means you have fast access to the underlying data that you're trying to model against. It's a caching use case, effectively, so that's in our sweet spot. And then what we've been seeing also is that people are starting to use more and more semi-structured data as part of their machine learning models. So one of the capabilities we're adding to our search engine is vector similarity search. What does that mean? It sounds like a bunch of techno mumbo jumbo, but essentially you can take images and turn them into vectors, which are a sort of stream of numbers, and then you can use math to compare those vectors to each other. So you can do things like, hey, I have a red shirt, a blue shirt and a green shirt, and when somebody comes in and says, well, I want to look at shirts of different sizes and shapes, the vectors will give them information that can be searched upon, which allows you to pull those images back and present them extraordinarily quickly. So that's a capability we're adding to support folks that are doing recommendation engines, machine learning activities and AI extensions. Those are two of the things, and then our team is also looking at what's going on with AI and ML pipelines and where we might show up and integrate more naturally with folks. I think we're finding some areas where there are certain data structures we don't currently support that we might want to, to make that easier.
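The "turn things into vectors and use math to compare them" idea can be made concrete. Cosine similarity is one common comparison, and RediSearch's vector fields take raw little-endian float32 bytes; the index name and query shape below are illustrative, not from the conversation:

```python
import math
import struct

def to_blob(vec):
    # RediSearch vector fields are stored as raw little-endian float32 bytes.
    return struct.pack(f"<{len(vec)}f", *vec)

def cosine_similarity(a, b):
    # 1.0 means identical direction, 0.0 means unrelated (orthogonal).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A KNN query against a hypothetical index of shirt-image embeddings
# might take this shape (the "<blob>" placeholder would be a to_blob() result):
KNN_QUERY = [
    "FT.SEARCH", "idx:shirts", "*=>[KNN 3 @embedding $vec]",
    "PARAMS", "2", "vec", "<blob>", "DIALECT", "2",
]
```

Two shirts whose image embeddings point in nearly the same direction score near 1.0, which is what lets "products like this product" come back quickly.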
And I can tell you, we've had a couple of folks recently, iFood down in South America for example, talking about their success with the online feature store capabilities of Redis, and we're going to continue to extend in that direction.
Darryl Taft 35:27
One last quick one I want to sneak in: what can we expect to see from Redis in the next year? And for Alain, what are some of the things you'd like to see?
Alain Russell 35:40
Yeah, I can probably start. I guess the thing for us is that Redis is iterating faster than we can adopt at the moment, and that's really all we can ask for. We don't ever want to be in a position where we're waiting for things we would find useful, and currently the rate that new functionality is coming out is faster than we can make use of it, which is a great position to be in for us. I've got my eye on all the vector similarity stuff, because, you know, for e-commerce, "these products look like those products" is a really interesting use case for personalization and search. So we've certainly got a lot on our roadmap that we're going to be taking a look at next year. But all we can ask for is that functionality keeps arriving faster than we can use it.
Tim Hall 36:23
Darryl Taft 39:35
That's good. I love it. I really appreciate you taking the time. I want to thank the audience, the listeners, for taking the time to listen to another dynamic conversation with The New Stack. As always, we appreciate your support and feedback.
Colleen Coll 39:49
Redis provides a competitive edge to any business by delivering open source and enterprise-grade data platforms to power applications that drive real-time experiences at any scale. Developers rely on Redis to build performance, scalability, reliability and security into their applications.
Alex Williams 40:10
Thanks for listening. If you like the show, please rate and review us on Apple Podcasts, Spotify, or wherever you get your podcasts. That's one of the best ways you can help us grow this community, and we really appreciate your feedback. You can find the full video version of this episode on YouTube. Search for The New Stack and don't forget to subscribe so you never miss any new videos. Thanks for joining us, and see you soon.
Transcribed by https://otter.ai