The New Stack Podcast

Hachyderm.io, from Side Project to 38,000+ Users and Counting

Episode Summary

Back in April, Kris Nóva, now principal engineer at GitHub, started creating a server on Mastodon as a side project in her basement lab. Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, and began cutting thousands of jobs at the social media giant and making changes that alienated longtime users. And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded. “The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.” Though the rate at which new users are joining Hachyderm has slowed down in recent days, Nóva said, it stood at more than 38,000 users as of Dec. 20. Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. Internal Revenue Service, with intentions of building a new organization around Hachyderm. This episode of Makers, hosted by Heather Joslyn, TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it. Nóva and Joslyn were joined by Gabe Monroy, chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.

Episode Notes

Back in April, Kris Nóva, now principal engineer at GitHub, started creating a server on Mastodon as a side project in her basement lab.

 

Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, and began cutting thousands of jobs at the social media giant and making changes that alienated longtime users.

 

And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded.

 

“The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.”

 

Though the rate at which new users are joining Hachyderm has slowed down in recent days, Nóva said, it stood at more than 38,000 users as of Dec. 20.

 

Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. Internal Revenue Service, with intentions of building a new organization around Hachyderm.

 

This episode of Makers, hosted by Heather Joslyn, TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it.

 

Nóva and Joslyn were joined by Gabe Monroy, chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.

HugOps and Solving Storage Issues

Suddenly having a social media network to “babysit” brings numerous challenges, including the technical issues involved in a rapid scale up. Monroy and Nóva worked on Kubernetes projects when both were employed at Microsoft, “so we’re all about that horizontal distribution life.” But the Mastodon application’s structure proved confounding.

 

“Here I am operating a Ruby on Rails monolith that's designed to be vertically scaled on a single piece of hardware,” Nóva said. “And we're trying to break that apart and run that horizontally across the rack behind me. So we got into a lot of trouble very early on by just taking the service itself and starting to decompose it into microservices.”

 

Storage also rapidly became an issue. “We had some non-enterprise but consumer-grade SSDs. And we were doing on the order of millions of reads and writes per day, just keeping the Postgres database online. And that was causing cascading failures and cascading outages across our distributed footprint, just because our Postgres service couldn't keep up.”

 

DigitalOcean helped with the storage issues; the site now uses a data center in Germany, whose servers DigitalOcean manages. (Previously, its servers had been living in Nóva’s basement lab.)
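Mastodon can offload media attachments to any S3-compatible object store, which is the kind of move described here. As a rough sketch only (the episode doesn't show Hachyderm's actual settings, and the bucket name, endpoint, and hostname below are hypothetical), the relevant Mastodon `.env.production` fragment looks something like:

```
# Hypothetical .env.production fragment: store uploaded media in an
# S3-compatible object store (e.g., DigitalOcean Spaces) instead of local disk.
S3_ENABLED=true
S3_BUCKET=example-media                          # illustrative bucket name
S3_ENDPOINT=https://fra1.digitaloceanspaces.com  # illustrative region endpoint
S3_ALIAS_HOST=media.example.social               # optional CDN/alias hostname
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
```

With settings like these, new uploads go straight to object storage, while previously uploaded files still have to be migrated separately.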

 

Monroy, a longtime friend of Nóva’s, was an early Hachyderm user and reached out when he noticed problems on the site, such as when he had difficulty posting videos and noticed other people complaining about similar problems.

 

“This is a ‘success failure’ in the making here, the scale of this is sort of overwhelming,” Monroy said. “So I just texted Nóva, ‘Hey, what's going on? Anything I could do to help?’

 

“In the community, we like to talk about the concept of HugOps, right? When people are having issues on this stuff, you reach out, try and help. You give a hug. And so, that was all I did. Nóva is very crisp and clear: This is what I got going on. These are the issues. These are the areas where you could help.”

Sustaining ‘the NPR of Social Media’

One challenge in particular has nudged Nóva to seek nonprofit status: operating costs.

 

“Right now, I'm able to just kind of like eat the cost myself,” she said. “I operate a Twitch stream, and we're taking the proceeds of that and putting it towards operating service.” But that, she acknowledges, won’t be sustainable as Hachyderm grows.

 

“The whole goal of it, as far as I'm concerned, is to keep it as sustainable as possible,” Nóva said. “So that we're not having to offset the operating costs with ads or marketing or product marketing. We can just try to keep it as neutral and, frankly, boring as possible — the NPR of social media, if you could imagine such a thing.”

 

Check out the full episode for more details on how Hachyderm is scaling and plans for its future, and Nóva and Monroy’s thoughts about the status of Twitter.

 


 

Feedback? Find me at @hajoslyn on Hachyderm.io.

Episode Transcription

Alex Williams  0:08  

You're listening to The New Stack Makers, a podcast made for people who develop, deploy and manage at-scale software. For more conversations and articles, go to thenewstack.io. All right, now on with the show.

 

Heather Joslyn  0:33  

Hello, and welcome to another episode of The New Stack Makers. I'm Heather Joslyn, the features editor of The New Stack. And today we're gonna talk about Twitter, Mastodon, Hachyderm, and solving the good problem to have: how to scale super fast to meet user demand. One of the biggest tech stories of 2022, if not the biggest, was the sale of Twitter to Elon Musk for 44 billion, with a B, dollars, and the subsequent changes to the social media giant, including deep staff cuts and a reported rise in hate speech on the site. Many Twitter members sought an alternative and landed on Mastodon, a free and open source platform for running self-hosted social media networks. In this episode, we'll talk to Kris Nóva, principal engineer at GitHub. Hi, Nóva. Nóva will tell us how and why she created Hachyderm, a fast-growing server on Mastodon aimed at the tech community. And we'll also talk to Gabe Monroy, chief product officer at DigitalOcean. Hi, Gabe. And he will tell us about the role DigitalOcean played in the story of Hachyderm, including helping it scale. Let's get started. First of all, Nóva, let's set the timeline in our minds. This fall, you were on Twitter, you had a pretty large following, and you still have more than 26,000 followers. When did you decide to leave Twitter? And was there a moment when you were thinking, I'm out of here?

 

Kris Nova  1:53  

In all honesty, I don't really know how I ended up on Twitter in this situation, but I did. I think when Gabe and I met in 2015 or so, I was a relatively small Twitter user who was just using it for fun, and somewhere along the way, things got out of control. And I started to get thousands of followers, and people started to take my opinion fairly seriously. Earlier this year, just in general, I had been having some issues with the platform, and just finding a way for me to express myself in a healthy way that seemed to align with the direction Twitter was going. Earlier this year, you know, it was announced that Elon was going to buy Twitter, and I was already having some issues with the platform outside of all of that. And I think I've always been a larger advocate for open source software and open source anything in my life, whether it's hosting images for my family, or social media, or anything. I always love a good piece of technology to go and explore and try out, and I have a home lab behind me. So I think it was fairly natural for me to try Mastodon. I think it took the actual sale to go through, and just starting to see some of the impact of what Elon was doing at Twitter, before I finally said, I'm out of here. I think that it kind of happened in about two phases. The first one was, hey, everyone, I'll be on Mastodon if you need me, and I'll kind of do both for a while. And I think the moment I started seeing him reinstating Trump's account and starting to, like, stoke the fire for some of the, like, anti-transgender commentary that I've seen, it was kind of enough for me to just go, all right, enough's enough. I'm done contributing and putting my content here on this platform, especially if it's going to just put more money in the hands of a billionaire at this point. And we already had the Mastodon server online, so it was pretty natural for me to finally say, okay, I'm out of here.

 

Heather Joslyn  3:29  

So you moved to Mastodon and created Hachyderm at the same time? Or...

 

Kris Nova  3:34  

So Hachyderm is interesting. We started the server in April of this year, and it was just, like, a small, fun hobby project. It ran on the server rack behind me here on the video, if you can see it. And it kind of started off with me and about 20 of my closest friends and family. I do, like, a Twitch stream in the morning, which is where I carve out a small portion of my day to go and prototype and play with new technology. And Mastodon was something that was on my radar. You know, we've prototyped 20 or 30 pieces of technology over the past year or so, and this just happened to be the one that kind of blew up. And so the server started, like, very small, but it was online. And I think one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.

 

Heather Joslyn  4:24  

Wow. So it's grown to 30,000 already. That's amazing.

 

Kris Nova  4:28  

Yeah, I think our last report was 34,000. I think it's slowed down quite a bit in the past 30 days or so. But there was a time there where we were growing really quick. But we're definitely above 30,000 at this point.

 

Heather Joslyn  4:40  

You wrote an O'Reilly book called Cloud Native Infrastructure, so clearly you're familiar with scaling. What were the challenges that you encountered as it grew so fast? What are some of the milestones that you hit in terms of meeting that demand?

 

Kris Nova  4:55  

On the technology side? So I feel like this is, like, such a Jurassic Park-style answer. Like, I had all the problems of a theme park and dinosaurs. But, like, we had all the major problems of a normal, like, modern-day distributed application system or distributed service. And we also had the Twitter migration, right? Like, it kind of all hit at the same time. And so on the technology side, we ran into, you know, how is the Mastodon application structured? And, like, how does the Ruby code relate to the database? And what are the assumptions that it made? Gabe, I'm sure, will be able to tell you a lot more about this than I will. But his background and my background, like, we worked on Kubernetes at Microsoft together. And so we're all about that horizontal distribution life. And here I am operating a Ruby on Rails monolith that's designed to be vertically scaled on a single piece of hardware, and we're trying to break that apart and run that horizontally across the rack behind me. We got into a lot of trouble very early on by just taking the service itself and starting to decompose it into microservices. I think that the one limiting factor we hit from the technology side is our disks. We had some non-enterprise, but consumer-grade, SSDs in the rack behind me, and we were doing on the order of millions of reads and writes per day, just keeping the Postgres database online. And that was causing cascading failures and cascading outages across our distributed footprint, just because our Postgres service couldn't keep up. So that was definitely the first thing to go. On the human side, like, we now are, we're moderators. We're basically recreating a small instance of Twitter. Everything that Twitter had to do, we have to do. We have to deal with issues and reports, and people have questions and concerns, and there's laws, and there's a lot that goes into it. And we hit those challenges as well, at the same time.

 

Heather Joslyn  6:35  

When you say "we," how many people are having to be de facto moderators? I did want to dig in a little bit to what that process has been like. You have this social network, and not everyone is going to play nice. I got an account on your server, and there are rules right away that you have to agree to. But how many people do you have helping you with moderation?

 

Kris Nova  6:57  

We have two teams that kind of keep Hachyderm alive today, one of which is the infrastructure folks. I'm kind of the leader of that group, and then my partner, Quintessence, she spearheads our moderation effort and our community effort. And so, I mean, this is just, like, as most, I guess, in our case, it would be mom-and-mom, but, like, mom-and-pop-style social media services you can imagine, right? We're partners. We decided to host this thing and to get involved with it. And, you know, it started out as, like, me giving some SSH access to a couple of buddies of mine, to, like, now we have an entire group of folks who are creating Discord channels and rolling out programs in partnership with people at DigitalOcean. We've partnered with Tailscale and Honeycomb, and it feels like we're operating a small startup here. We have to deal with burnout and time management, legal concerns, and, like, it's a whole thing.

 

Heather Joslyn  7:47  

Gabe, when did you enter the story of Hachyderm? It sounds like the disk situation and the breaking of this monolith into microservices, obviously...

 

Gabe Monroy  7:56  

Before that, Heather. Yeah, I'll just say that I entered the picture as a user, you know. I was a user, and I was posting videos on the community, and Nóva and I go back, as you mentioned, a long ways. And I think we've been working together on and off since, like, 2016, 2017, from Deis to Microsoft. And we've kept in touch since then. And one of the things I've always really appreciated about her is her ability to sort of lead and pull communities together in lots of different forums. I've seen her do that kind of throughout her career. And so when she stood this Mastodon instance up, I was like, well, hell yeah, I'm along for the ride. So, you know, I set up an account, and I was posting myself on the server. And so I was actually one of the people who noticed some of the issues, right? So all of a sudden, I'd be on there, and I'd be trying to post, and there'd be a 500 error in the apps that I was using. And I was like, oh, gosh. And then I started to actually notice in the community, people were complaining, you know, more broadly about, hey, look, there's issues going on. And I was like, oh, I know what this is. This is a success failure in the making here. You know, the scale of this is sort of overwhelming. So I just kind of, like, texted Nóva, was like, hey, what's going on? Like, anything I can do to help? And yeah, I don't know, in the community we like to talk about the concept of HugOps, right? This idea of, like, hey, when people are having issues on this stuff, reach out and try and help, you know, give a hug. And so that was all I did. And Nóva's very crisp and clear. Like, this is what I got going on. These are the issues. These are the areas where you could help. And I was like, well, hey, look, I think we could maybe provide some help on the object storage front and offload some of the storage stuff to Spaces. There's probably a lot more we could do, but let's focus on putting out the fire.
And once that fire is out, we can figure out if there's any ways we can help beyond that.

 

Heather Joslyn  9:39  

So what does the back end look like now, with what you got from DigitalOcean?

 

Gabe Monroy  9:43  

Yeah, I think Nóva and team put out some really great blog posts, and we put out another one on the DigitalOcean blog that goes into some more details on this. But one of the challenges that the team had was people uploading photos and media along with the status posts on the site, like I myself was doing, and that was hitting these disks that Kris was mentioning here. And I think the issue that we started to see was, hey, is there a way to offload that to something that's going to be a bit more scalable? And DigitalOcean Spaces is the object storage solution that can provide that capability. But that wasn't the interesting part. The interesting part was how the team built it. And there was this really novel nginx hack with this try_files component, where the team was able to essentially write data to DigitalOcean Spaces while also sort of moving the data in tandem, like, almost having customers help move the data over time into object storage, and really work through the scale motion. And from the outside looking in, as I was doing my bit on this, it was just marvelous to see such a talented team of system operators using really creative techniques to put in the work on scaling this infrastructure. And so we wanted to blog a little bit about it, and I was really happy to see some of those techniques shared with the broader community.

 

Heather Joslyn  10:56  

And we'll also link to some of those blogs in the podcast notes. Is there a particular solution that you arrived at that you kind of want to brag about? You want to say, this was a cool thing that we were able to figure out?

 

Kris Nova  11:07  

I would love to give a little bit more clarity on the try_files situation and talk about the specifics of how DigitalOcean was able to help. And I think also, just while we're on the topic, this is the first time I'm kind of hearing what it's been like from Gabe's experience. And I think this is very indicative of the problem I alluded to earlier, where, like, a couple of my buddies, casually the CPO of DigitalOcean, decides to go and post chicken pictures, and now all of a sudden, like, he can't go put his chicken pictures online. And here we are right now, and we're having this, like, distributed systems conversation.

 

Gabe Monroy  11:37  

It was chicken videos. So it was even worse than that.

 

Kris Nova  11:41  

Gabe couldn't put his farm videos on Mastodon, and now we've switched over to DigitalOcean object storage. There's a really good story here. You know, we had spun up a small hobby server, and like all successful projects, we got into this thing with the best intentions. Like, we weren't expecting thousands of people to show up. When we set it up, we were like, yeah, you know, carve out a few terabytes of file storage, and, you know, we'll give it to the media share, and we'll carve off another terabyte and give it to Postgres, and we'll run everything on the same disk. Why not? Like, it's a small-time hobby shop. And then it kept growing. And when these things start growing, like, it's already too late at that point to really make any big monumental changes to the underlying infrastructure. And so we had very quickly, like, within a matter of two weeks, gone from a small hobby server to, like, we now have a full-fledged production crisis on our hands. I think the thing that kept us moving forward was the fact that it was the broader tech industry using this thing. I think if this would have been, like, my mountain climbing community, or, like, Gabe's rancher community, it probably would have gotten a little less attention than it did. But, like, we had some relatively big hitters trying to move over off Twitter, and we knew that we needed to actually get the service to a sustainable point. The real problem here was that the whole state for the service was stored on one server, which is behind me in the rack here, and that was already distributed across eight SSDs. And so when one service, whether it was Postgres or our media shares, started to slow down, we would see that cascade into other parts of the Mastodon app. And Mastodon is very prone to cascading failures.
And then that would eventually make its way out to the edge, where folks would see 500 errors, or their images would kind of upload or not upload, or they would try to redo it and it would create a duplicate post. There were, like, a lot of issues we were seeing. When you look at the big difference between object storage and block storage, object storage takes, like, the same primitive and is able to distribute it across multiple machines so that you don't run into these types of problems. And that's exactly what Gabe was able to offer for us, which was, like, hey, just point your application at this in DigitalOcean and relieve some of the pressure. And that was actually the magic move that was able to give us enough, like, free space and free CPU cycles on our server to actually finish the rest of our migration out of my basement here and get it into a proper data center, where it now lives today. I think what Gabe was talking about with the try_files nginx proxy is a really, really good example of, like, leveraging modern-day technology to solve an interesting problem. And the way that works is you can tell nginx to look for a file in a priority location and serve it and cache it from there if it exists, and if not, proxy that request off to, like, a different location. So what we were able to do is we were able to set it up in such a way that anytime somebody looked at a photograph or an image on their Mastodon page, it would come off the server in the rack, and it would exist in the cache on what we call our CDN proxies, which are globally distributed around the world. And then that would slowly sync itself back to DigitalOcean on the back end and permanently live there. And nginx would then just be smart enough to be able to try DigitalOcean first moving forward, and would never actually look for that image back on the rack where it was. So it took us about two weeks after Gabe had texted me and was like, yo, what's going on, I can't upload my chicken videos.
And, you know, it took 1.4 terabytes of data to get transferred, both to a data center in Germany and to DigitalOcean, and now that's where it lives. And I think it was three nights ago, we actually were able to get on the box and check and make sure that everything was finally transferred. And, like, it was kind of one of those Office Space moments, where it was like, all right, we're gonna actually pull the plug on the server and take the thing offline for the first time since April, since we spun up the Mastodon server. It was a long process to get here, but I really don't think we would be here today if it wasn't for, like, our ability to leverage DO and get some of our media files out of the rack as quickly as we can.
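The try_files pattern Nóva describes, serve locally if the file is still on disk, otherwise fall back to the object store and cache the result, can be sketched in nginx configuration roughly like this. This is a minimal illustration only: the paths, cache zone name, and Spaces hostname are hypothetical, and Hachyderm's actual config isn't shown in the episode.

```nginx
# Illustrative cache zone for media fetched from object storage.
proxy_cache_path /var/cache/nginx/media keys_zone=media_cache:10m max_size=10g;

server {
    listen 443 ssl;
    server_name media.example.social;  # hypothetical CDN-proxy hostname

    location / {
        root /srv/mastodon/public;     # hypothetical local media root
        # Serve the file from local disk if it still exists there;
        # otherwise fall through to the object store.
        try_files $uri @object_storage;
    }

    location @object_storage {
        # Hypothetical Spaces bucket endpoint.
        proxy_pass https://example-media.fra1.digitaloceanspaces.com;
        proxy_set_header Host example-media.fra1.digitaloceanspaces.com;
        proxy_cache media_cache;
        proxy_cache_valid 200 24h;     # cache hits on the edge proxy
    }
}
```

The effect is the migration behavior described above: once an object has been moved to (or cached from) the store, nginx never needs the copy on the rack again.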

 

Gabe Monroy  15:25  

And yeah, I just have to say, I was really impressed with Nóva and team's ability to just, like, move with speed and purpose. I mean, at the end of the day, DigitalOcean just provides an endpoint to write object storage to, so, like, you know, the work on our side was fairly small. But Nóva invited a team of engineers and folks from our end into their Discord, where they call some of the plays for this stuff, and the team moved really quickly to, like, test this out in staging and start rolling this out. And obviously, that amount of data, you know, terabytes of data, it takes a minute to move all that stuff. But it was really cool to see the team of folks working with speed on this. And what was interesting, too, is, you know, a lot of the folks on the DigitalOcean side who were just sort of advising and helping the team along, just to make sure everything was smooth, they were also members of the Hachyderm community, too, right? So everyone kind of had this vested interest in seeing, you know, I mean, they don't all have chicken videos like me, right? So I was extra eager for this. I think Nóva mentioned this before, this Hachyderm community has turned out to be a pretty important community so far. And I think, for me, it's been really fascinating to see the quality of individuals and discourse. And, I don't know, it feels very different than Twitter feels. And I like to think of it almost like back in the early days of Facebook, when Facebook was, like, just your actual family and friends, and the feed was, like, just the stuff that you care about, and you could really connect from a social media perspective, and it brought value and, like, joy to your life. That's what Hachyderm feels like to me, and Twitter does not. I know that's what Nóva's going for and what the community is going for, but I think everyone who's involved in the community feels that to a degree.
And so we were all bummed when the scaling was an issue, myself included. And I'm just thrilled to see the team coming out on the other side.

 

Heather Joslyn  17:18  

Has the rate of people joining Hachyderm slowed down, plateaued, sped up, kept going faster? Where do you stand now?

 

Kris Nova  17:27  

Great question, Heather. So we have graphs. We're definitely in love with our ability to build some, like, open source observability tools here. So right now, if you go to grafana.hachyderm.io/public, you can see there are, like, active graphs. And anytime somebody joins the service, you can see, like, the user count go up, and you can see however many statuses we have. But to answer the question, we have absolutely slowed down. Throughout the course of, like, the end of October, throughout most of November, that's when we got the bulk of our new users to Hachyderm. And there was one day in particular where I think we got, like, 1,400 new users in a single day. And so today, like, we're seeing maybe 1,000 join, like, every couple of days or so. And, like, you know, we'll watch it go from 33 to 34,000, and those are, like, small milestones in their own right. But I mean, that first month, and, like, the early weeks of November, it was just like, I would, like, go to bed and there'd be 19,000. I'd wake up and there'd be 24. It was just kind of like, oh, this is a lot. So we've seen it slow down. I think that, like, we want to continue to allow folks to join, and we're trying really hard to keep that, like, small-business, community-vibe sense of social media alive. I think that one of the things we learned with Twitter, just observing it and being a Twitter user, is that, like, when you get to the point where it turns into a product, and it turns into, like, a revenue-seeking corporate product, then, like, there's going to be some trade-offs there. And there's not necessarily anything wrong with that, although, I think with Twitter, like, it became very obvious very quickly that Twitter was a vehicle for money. And I think that came at the expense of users like myself. With Hachyderm, the whole goal of it, as far as I'm concerned, is to keep it as sustainable as possible.
So that we're not having to, like, offset the operating costs with, you know, ads or marketing or product marketing. We can just try to keep it as kind of, like, neutral and, frankly, boring as possible. The NPR of social media, if you could imagine such a thing.

 

Heather Joslyn  19:20  

That's a good tagline. Yeah, I did have a question. We're talking about the servers in the data center in Germany and all that. Like, how are you able to cover operating costs?

 

Kris Nova  19:30  

So, right now, I'm able to just kind of, like, eat the cost myself. I operate a Twitch stream, and we're taking the proceeds of that and putting it towards operating the service. That was kind of the model originally. I think that the home lab behind me is something that I was able to just pay for and just continue to operate on. And now that it's actually a service, we're having to figure out ways of, like, supporting it and funding it and being transparent. So I haven't actually made an announcement yet, but I'm more than happy to, like, share with both of you while we're here. The work that we have done with DigitalOcean and the other corporations that we've been partnering with have inspired us to form a 501(c)(3). And so we're going to be announcing a nonprofit in January. Yay. Exciting, right? And the hope for that is so that we can begin to look at funding models and figuring out a way to offset the cost and track the budget and the cost of this thing in a transparent way and kind of give it back to the community.

 

Heather Joslyn  20:19  

That's great. Do you anticipate having to make any changes to the tech stack in the coming months? It's continuing to get more users, and obviously, once this podcast is out, with our vast global audience, and additional attention from other media, it will continue to grow. Do you feel you've got a system for scaling it now in place?

 

Kris Nova  20:43  

we have the initial fire put out, so like all chicken videos are good to go for the foreseeable future. However, like we still have a backlog, right, there's things we need to do. And I think that priority in my mind right now is get the 501 C three formed and start to identify our like our revenue streams in our in our budget for the year, I think that we're going to be entertaining the idea of doing some sort of like open science based experimentation where we can like bring on different products and prototype them at scale, and actually bring folks on and use it as sort of a vehicle for documenting some of the raw computer science behind some of the products folks are interested in trying out. And our hope is that we can we can figure out a way that's kind of beneficial to everyone without being invasive to our community that allows us to try out new observability techniques, or distributed scaling techniques or cloud offerings, etc, etc. So I think that's going to kind of cover the budget. As far as the technology goes, you know, we want to get to a point where we can bring in marginalized people, young folks to the industry, folks who are interested in making a career pivot folks who are interested in learning more about how to operate a large distributed global system, and bring them in and help them contribute to our day to day operations, right? If somebody was looking for experience to sit in the driver's seat, and like, what does it look like to wear a pager for one of these, like large, multi 1000, user large distributed systems? How do I get involved and this gives us an opportunity to kind of do that in a safe way with like, low risk to your average newcomer to the technical industry. So I think I would suspect anywhere, there's open source services that people seem to enjoy, there's innovation, we're gonna see, this is just my speculation, but I suspect we're gonna see all kinds of like open source projects and new ways of communicating with people. 
And you know, folks have been throwing around a lot of the original Facebook features: how do we schedule events? How do we document communities? What about a marketplace? There's probably going to be a lot of that that we'll see in the fediverse in the coming year, and I just hope that Hachyderm is going to be able to support that and build toward it. Very specifically, as far as our stack is concerned, we're probably going to get another four or five large servers in Germany, and some here in the States for legal purposes, so we can have data residency checked off. And other than that, we'll probably try to keep it as boring as possible.

 

Heather Joslyn  22:49

One more question: what do you think the Twitter story tells us about the role social media plays, or should play, in our lives?

 

Kris Nóva  22:57  

This is an opinion question. Gabe, you want to go first?

 

Gabe Monroy  23:00  

I think Twitter has been an important part of the tech scene for a long time. And to me, personally, it's sad. It's just sad to see the direction it's taken. But like with all things that are sad, and maybe come to some amount of an end in terms of expectations, that creates opportunity. And what I've seen with the Hachyderm community has surprised me in a positive way, in terms of how impactful it can be in such a short amount of time. And my real hope going forward here is that folks of all stripes can chip in, because at the end of the day, this is a community. And if people don't pitch in in some way, whether it's money or infrastructure or your cycles, or whatever that is, it falls on the backs of the handful of people who volunteer their time to look after the community. And to Nóva's point before around sustainability, that's not sustainable. I'm really looking to see: can we showcase what community can do, almost as a counterpoint to what centralized social media can do? I'm really interested to see if we can prove that out in the fediverse, and sign me up for that. Heck, yeah.

 

Kris Nóva  24:10  

I couldn't have said it better myself. I'm here for it. I think I've learned a lot about the relationship of corporations and product marketing to things like social media through Twitter. But like Gabe said, I wouldn't be here if it wasn't for Twitter, right? Twitter bought this house. My career wouldn't be where it is, I wouldn't be where I am, my book wouldn't be published. There's a lot where Twitter was there and a part of my personal journey. So like Gabe said, it's sad, right? I feel like I'm losing a small part of what made me, me. There's definitely a chunk of my identity on twitter.com/krisnova. And it's nice to be able to reinvent myself, and to kind of cross off some of those bad habits that I used to be in with Twitter. But at the same time, it is a shift, and it's not going to be one-to-one with Twitter. There's going to be things that we're going to have to find new homes for, and there's going to be new opportunities, and some things that are going to be different now.

I think as far as the community is concerned, I'm excited. And I deeply believe that we have a path forward where we can have an open source and collaborative social media footprint on the planet, frankly, where folks can come together and contribute in the name of science and altruism, to just contribute to a service that we all love, and that we all enjoy, and that lifts us all up. What's the expression? The rising tide raises all ships? I definitely see a lot of that here with Hachyderm. And we're trying to figure out the balance: do we bring corporations on? Do we let folks contribute? Do we not? And I think there's a lot of unanswered questions at this point. To be candid, I would just love to see more of the type of things that we're seeing from Gabe and DigitalOcean, where they can just come, no strings attached, and say: here's how we can help out, and we're just here to help lift up the community any way we can. And that's been super helpful and super exciting to watch.

 

Heather Joslyn  25:50  

Excellent. Thank you very much for your time, Nóva and Gabe, I really appreciate it. Thank you for sharing this story. And I want to thank all of you for listening in. I'm Heather Joslyn for The New Stack Makers, and we'll see you next time.

 

Alex Williams  26:02  

Thanks for listening. If you liked the show, please rate and review us on Apple Podcasts, Spotify, or wherever you get your podcasts. That's one of the best ways you can help us grow this community, and we really appreciate your feedback. You can find the full video version of this episode on YouTube: search for The New Stack, and don't forget to subscribe so you never miss any new videos. Thanks for joining us, and see you soon.

 

Transcribed by https://otter.ai