DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years.
“And now we're playing catch up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less-than-ideal practices have taken root in the past five years. We're trying to help educate everybody now.”
Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps-ish newsletter.
“We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes …. There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”
Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.
Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.
This podcast episode was sponsored by AWS.
For our podcast guests, “trust, but verify” is a slogan more organizations need to live by.
A lot of the security problems that plague the software supply chain, Black said, stem from companies — especially smaller organizations — “just pulling software directly from upstream. They trust a build someone's published, they don't verify, they don't check the hash, they don't check a signature, they just download a Docker image or binary from somewhere and run it in production.”
That practice, Black said, “exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push it back upstream.
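Verifying an upstream artifact before it reaches staging can be as simple as checking its digest against the published one. A minimal sketch in Python; the file path and digest shown are placeholders, not real values:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded artifact's SHA-256 digest to the published checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images and binaries aren't loaded into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()

# Hypothetical gate in a staging pipeline:
# if not verify_artifact("downloads/app.tar.gz", published_digest):
#     raise RuntimeError("checksum mismatch; refusing to promote to staging")
```

Signature verification (with GPG or Sigstore, for example) goes a step further, binding the artifact to a publisher rather than just to a known byte sequence.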
That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’”
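The same “approved source” policy can be enforced in the build tooling itself, alongside the firewall. A rough sketch; the allowlisted hosts here are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only registry hosts a build may fetch from.
APPROVED_HOSTS = {"registry.internal.example.com", "mirror.internal.example.com"}

def check_source(url: str) -> None:
    """Raise if a package URL points at a host outside the approved list."""
    host = urlparse(url).hostname
    if host not in APPROVED_HOSTS:
        raise PermissionError(f"blocked: {url!r} is not from an approved source")

check_source("https://registry.internal.example.com/pkgs/foo-1.2.tar.gz")  # passes
# check_source("https://random-mirror.example.net/foo.tar.gz")  # would raise
```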
Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted.
More automation is part of the solution, Short said. But, he added, by itself it's not enough. “Continuous learning is what we do here as a job,” he said. “If you're kind of like, this is my skill set, this is my toolbox and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”
As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. And even such highly touted tools as software bills of materials, or SBOMs, fall short of giving teams all the information they need to determine code’s safety.
“Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.”
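The depth problem is easy to demonstrate with a toy dependency graph: a walk capped at two layers (like a shallow SBOM) never sees a vulnerable package sitting four layers down. The graph and package names below are invented for illustration:

```python
from collections import deque

# Invented dependency graph; "oldcrypto" sits four layers below the application.
DEPS = {
    "app":  ["web", "json"],
    "web":  ["http"],
    "http": ["tls"],
    "tls":  ["oldcrypto"],
}

def reachable(root, max_depth=None):
    """Breadth-first walk of the dependency graph, optionally capped at max_depth."""
    seen, queue = set(), deque([(root, 0)])
    while queue:
        pkg, depth = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        if max_depth is None or depth < max_depth:
            for dep in DEPS.get(pkg, []):
                queue.append((dep, depth + 1))
    return seen - {root}

print("oldcrypto" in reachable("app", max_depth=2))  # False: the shallow view misses it
print("oldcrypto" in reachable("app"))               # True: the full walk finds it
```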
Short brought up another issue with SBOMs: “There's nothing you can act on. The biggest thing for Ops teams or security teams is actionable information.”
While Short applauded recent efforts to improve user education, he said he’s pessimistic about the state of cybersecurity: “There’s not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash.”
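Short's Bash example comes down to matching on version, not mere presence: a host is only affected if its installed version predates the fix. A simplified sketch; the version numbers are made up, and real scanners use full semantic-version and distro-patch parsing:

```python
def parse_version(version: str) -> tuple:
    """Naive parse: '5.1.8' -> (5, 1, 8). Real tools handle far messier schemes."""
    return tuple(int(part) for part in version.split("."))

def is_affected(installed: str, fixed_in: str) -> bool:
    """Flag a host only if its installed version is older than the fixed one."""
    return parse_version(installed) < parse_version(fixed_in)

# Hypothetical vulnerability fixed in 5.1.8: a patched host should not be flagged.
print(is_affected("5.0.17", "5.1.8"))  # True: unpatched, actionable
print(is_affected("5.2.15", "5.1.8"))  # False: patched, don't flag it
```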
One project aimed at addressing the situation is GitBOM, a new open source initiative. “Fundamentally, I think it’s the best bet we have to provide really high fidelity signal to defense teams,” said Black, who has worked on the project and produced a white paper on it this past January.
GitBOM — the name will likely be changed, Black said — takes the underlying technology that Git relies on, using a hash table to track changes in a project's code over time, and reapplies it to track the supply chain of software. The technology is used to build a hash table connecting all of the dependencies in a project, building what GitBOM’s creators call an artifact dependency graph.
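The idea can be sketched in a few lines of Python: give each artifact a Git-style object ID, and fold in the IDs of its dependencies so that a single hash fingerprints the whole graph beneath it. This is only an illustration of the concept, not GitBOM's actual format or hashing scheme:

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    """Git-style object ID: SHA-1 over a 'blob <length>\\0' header plus the content."""
    return hashlib.sha1(b"blob %d\0" % len(data) + data).hexdigest()

def artifact_id(data: bytes, dep_ids: list) -> str:
    """Fingerprint an artifact together with its sorted dependency IDs, so the
    hash identifies the entire artifact dependency graph used to build it."""
    return git_blob_id("".join(sorted(dep_ids)).encode() + data)

# Toy supply chain: a C library compiled into a binary, packaged into an image.
lib = git_blob_id(b"openssl-ish source file")
binary = artifact_id(b"compiled binary", [lib])
image = artifact_id(b"container image", [binary])
graph = {image: [binary], binary: [lib], lib: []}

def contains(root: str, bad: str) -> bool:
    """The defender's query: is a known-bad hash anywhere in this artifact's graph?"""
    return root == bad or any(contains(dep, bad) for dep in graph[root])

print(contains(image, lib))  # True: the flagged source is in this image's lineage
```

As Black notes of the real system, a hit doesn't prove you're vulnerable; it tells you where to investigate.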
“We've got a team working on a couple of proofs of concept right now,” Black said. “And the main effect I'm hoping to achieve from this is a small change in every language and compiler … then we can get traceability across the whole supply chain.”
In the meantime, Short said, there’s plenty of room for broader adoption of the best practices that currently exist. “Security vendors, I feel like, need to do a better job of moving teams in the right direction as far as action,” he said.
At DevOpsDays Chicago this fall, Short said, he ran an open space session in which he asked participants for their pain points related to working with containers.
“And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space,” he said. “So there's a lot of complexity that we’ve got to help people understand the need for it, and how to implement it.”
Listen to the whole podcast to learn more about the state of software supply chain security.
Colleen Coll 0:08
Welcome to this special edition of The New Stack Makers: On the Road. We're here at KubeCon North America with discussions from the show floor, with technologists giving you their expertise and insights to help you with your everyday work. Amazon Web Services is the world's most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally. Millions of customers trust AWS to power their infrastructure, become more agile and lower costs.
Heather Joslyn 0:46
Hello, everyone. Welcome to this On the Road edition of The New Stack Makers podcast. I'm Heather Joslyn, features editor at The New Stack. And we're coming to you from KubeCon + CloudNativeCon North America here in Detroit, the Motor City: a city that was built on making things, and on the robust supply chains that delivered parts and shipped out gleaming new vehicles, and still does. Today we're going to talk about the software supply chain and how we keep that safe. And that's an increasingly tough challenge. We're joined today by two guests who are working to help developers secure the software they make. We're joined first by Aeva Black, who currently works in Azure's Office of the CTO and holds seats on the board of the Open Source Initiative, on the OpenSSF Technical Advisory Council, and the shadow seat on the board of the CNCF, the Cloud Native Computing Foundation. Welcome, Aeva. Thanks so much for having me. Great, and we're happy to have you join us. And we're also joined by Chris Short, a senior developer advocate at Amazon Web Services, AWS to its friends. Chris is a Cloud Native Computing Foundation ambassador, Kubernetes contributor and upstream marketing team member; you may know him from the popular newsletter DevOps-ish, which he founded. Welcome, Chris. Thank you very much for having me. And today's On the Road episode of Makers is brought to us by our friends at AWS. Let's start with this: Why is the software supply chain in increasing amounts of danger? And what are some of the factors that are making it more dangerous?
Aeva Black 2:16
Well, I've been doing open source for, I guess, about 23, 24 years now. And in that time, we've seen so many more people contributing, so many more companies participating and consuming open source, and especially in the past five years, a really exponential growth. And all of that participation was wonderful. But a lot of the best practices that the older communities, Debian, the Linux kernel, Red Hat, had developed, those best practices that helped to secure those sort of foundational projects, weren't replicated as much by modern projects, which are much more focused on, you know, agility, go fast, build community, rather than build secure software. And now we're playing catch up, right? So a lot of sort of less-than-ideal practices have taken root in the past five years. We're trying to help educate everybody now.
Chris Short 3:07
Yeah, I think the biggest thing to take away is that if security is everybody's job, it's nobody's job, right? So we've gone through this evolution where, oh, just develop secure code and you'll be fine. There's no such thing as secure code, right? Like, there's errors in the underlying languages sometimes, right? Like, we see vulnerabilities in everything. So there's no such thing as secure software. So you have to mitigate, and then be ready to defend against coming vulnerabilities.
Heather Joslyn 3:38
We're hearing a lot about zero trust. What is zero trust?
Aeva Black 3:41
Frankly, it's yet another buzzword. But the sort of principle it's trying to convey is: don't have, like, a single boundary around your network, for zero trust networking, right? The same applies to other domains as well, beyond networking. Verify everything, at every step of the way. So have a policy engine that's verifying things that have already come through your firewall, for example.
Chris Short 4:02
You know, coming from an infrastructure side, for me it's, you know, trust but verify, right? Like, I want to make sure that, yes, I think this is secure, but I want to write a test to, you know, validate that this thing has gone through attestation, this service has gone through all of its, you know, steps and processes. But sometimes the hardest part is creating those steps and processes, and then finding the tooling to implement whatever it is you want. That's another challenge.
Heather Joslyn 4:30
What are some of the most common mistakes you see developers in particular make that leave the supply chain less secure these days?
Aeva Black 4:36
I think a lot of the mistakes, frankly, is companies, especially small ones, not the big enterprises typically, just pulling software directly from upstream. They trust a build someone's published; they don't verify, they don't check the hash, they don't check the signature. They just download, you know, a Docker image or binary from somewhere and run it in production. That exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well. Companies and developers should have an internal staging environment, you know: pull in, verify. Trust, but verify that package you got from upstream before pushing it to production. Or, even better yet, be able to rebuild it yourself, in case you find a vulnerability and need to carry that fix in your own environment while working with upstream to get it merged; that takes more time. So ideally: trust upstream, verify it as you pull it in, and have the ability to patch it yourself before moving into your own production environment, and then push that work back upstream.
Chris Short 5:40
And having that whole build environment actually firewalled, I think, is an amazing concept, right? Like, don't let it call out to things that, you know, are not things it should call out to. Run a DNS log in your environment for a little bit. Create those safeguards of, oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen, right? Like, not in your production build environment. But also, if somebody does get into that production build environment, they can't do anything, right? So if that is compromised, they can only pull from official sources. So they would have to have something ready to go, already hijacking some, you know, package or some name, like that namespace of some sort, just to pull off, you know, a convincingly hard hack of a system that is vital to your implementations of software.
Heather Joslyn 6:33
Something you said, about making sure that they can patch things when they pull into an internal environment: we hear about, you know, the skills gap, and that there aren't enough people for these jobs. Is that part of the problem, why things are more dangerous now? You know, are there enough people who know how to fix things when they pull from these repositories?
Aeva Black 6:51
We do hear a lot about the skills gap, especially right now the skills gap in security, right? There's not a lot of programs teaching cybersecurity, or really security best practices. As Chris said, if it's everybody's job, it's nobody's job. At the same time, every developer, every person who does DevOps, it's their responsibility to understand the stack they're using. And I think we don't really teach, as much as, you know, those of us who learned a long time ago, the lower layers of that stack, and how to understand whether or not those are secured as well.
Chris Short 7:24
I mean, the lower layers of the stack are vitally important, right? Like, Aeva and I were working on something in Seattle months ago, and we went and pulled an SBOM for a project. The instructions didn't work; first, we submitted a patch. And then when we pulled down the SBOM, there were glaring omissions, right? Like, there were things missing, like the language implementations from the standard library.
Aeva Black 7:47
The SBOM was shallow, right? It only went one or two layers down. But we know that projects have dependencies, which have dependencies, which cross a language boundary to other dependencies, and we need to track all of that.
Chris Short 8:01
Right, we need to see our languages as, oh, they're also vulnerable too, right? Like, these are part of the standard library, as part of our build system, and treat that as something that can be very vulnerable.
Heather Joslyn 8:15
We've learned that with Log4j, that whole situation. And, I mean, are there other practices, in addition to trusting and verifying, and, like, testing something out in an internal environment, that you think more teams should be adopting?
Chris Short 8:27
I always say continuous learning is what we do here as a job. Yeah, if you're kind of like, this is my skill set, this is my toolbox, and I'm not, you know, willing to grow past that, you're setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, I need to change something across my entire environment, how do I do that? It changes your way of thinking about production environments completely. Yeah, because you're optimizing for this scenario of another Heartbleed, another Log4j, you know, the next thing that comes along. And we know there will be a next, there will always be a next, right? Like, Kubernetes has had vulnerabilities, for example. Every project here at KubeCon has had, probably, vulnerabilities that they've had to manage at some point.
Aeva Black 9:14
There is no such thing as perfectly secure software.
Chris Short 9:17
Yeah, that's a good point. And I think asking for secure software as a, like, procurement process point is just laughable, in my opinion, because that doesn't exist. You're asking for something that's not possible.
Aeva Black 9:32
What matters much more is your ability to respond to vulnerabilities: to identify where they are, what's affected, and how fast you can ring-fence that, triage it and apply a patch. That is the critical aspect of security in a lot of these environments: how fast can you respond, not, is it perfectly secure? Because that doesn't exist.
Chris Short 9:53
Yeah. Is it perfectly documented? No, you know, right?
Aeva Black 9:58
To your question: for an organization, not just upstream projects, having the knowledge and skills in house to be able to respond to everything you're using, right? That's the crucial part. Not just downloading the package off GitHub or off Docker Hub or PyPI and trusting it, but actually having your developers given enough time, enough training, to know how to fix it, or deal with it, if something vulnerable is found in it.
Chris Short 10:25
And your developers are already overwhelmed, yeah. So you need automation to help you, and you need that automation to have a good signal-to-noise ratio, which has been the hardest part right now.
Aeva Black 10:36
Absolutely, with everything. And SBOMs are just even more noise.
Chris Short 10:41
Yeah, SBOMs don't necessarily fulfill the goal. Oh, there's this vulnerable thing in Golang? You can't fix it, you know, right? Like, you have to wait for the next thing to come along. But if it doesn't even identify it, hey, there's a new version, what good is that, right? Like, an SBOM tells you what's there, not what you need, you know, right? Like, can it give you, you know, vulnerability information? No, it's hard to pull out, right? Like, it's not necessarily formatted in a great way for action.
Aeva Black 11:10
Not formatted consistently, and the depth isn't consistent, right? Right now the NTIA, that's a US national body, they've defined the minimum viable elements of an SBOM, and it's a depth of one, right? Yeah, all of our software has dependencies; many projects have dependencies 10, 20, 30 layers deep. And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.
Chris Short 11:37
Yeah, there's nothing you can act on, right? Like, the biggest thing for ops teams or security teams is actionable information, right? Great, something's vulnerable. Where? Right? Like, OK, I know that this software package is vulnerable, but where's it installed? It doesn't answer that question, right? Like, there's all these questions that still have to be answered after the SBOM, and after the SBOM is improved upon enough to be actionable.
Aeva Black 12:01
Yeah, that was the biggest feedback I heard from teams across the world around the Log4j incident, right? We knew it existed, we knew the risk. People were trying to find it in their own environments, people were scrambling, and then asking vendors to say, hey, this product that I've racked, a physical product, does it contain Log4j? Does it have a JVM? And, I don't know. Yeah, so answering these questions, to give defense and incident response teams better signal-to-noise ratio to identify if some component in their system is vulnerable or is potentially affected, that's a big part of our goal.
Heather Joslyn 12:39
Well, talking about SBOMs brings us to some of the new developments. Last year, Google introduced SLSA, the SLSA framework. I know you spoke at the Open Source Summit on GitBOM, which I want to ask you about in a second. What do you think are some of the most promising new developments that you've seen in the supply chain security field?
Chris Short 12:58
I mean, to be honest, like, as far as, like, things I'm happy about: I'm happy about there being some effort toward user education. But there's not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash, right? Yeah, you know, if I have a certain version of Bash that's patched, and you can identify that, don't flag it. Like, yeah, you know, the security vendors, I feel like, need to do a better job of moving teams in the right direction as far as action. I was at DevOpsDays Chicago a few weeks ago, and I had an open space called Container Conundrums: tell me your pain points around containers. Yeah. And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space, right? So, like, there's a lot of complexity that we've got to help people understand the need for, and the how to implement it.
Heather Joslyn 13:59
And I want to ask you about GitBOM here. Can you give us a sense of what that's about, and how that fits into all of this discussion about the supply chain?
Aeva Black 14:06
Yeah. So we're talking a lot about signal-to-noise ratio, and GitBOM is, fundamentally, I think, the best bet we have to provide really high-fidelity signal to defense teams. Right, what we've done, and the name is a little bit weird, we're going to rename it, because the first thing everyone asks me is, why do you call it GitBOM? It's not Git, and it's not an SBOM, right? But what we've done is we've taken the underlying technology that Git relies on, using a hash table to track changes in a project's code over time, and we've reapplied that same technology to track the supply chain of software. So when you take a C library, say OpenSSL, and you compile it with GCC, and then you link that into a project in a different language, and then that gets built into a Docker image, all the way up the chain, right? What we're doing with GitBOM is building a hash table connecting all those dependencies, and building what we call an artifact dependency graph. The end result is, you've got a binary, it's an RPM or Docker image, whatever it is; it has a fingerprint, the hash, in it, and that hash identifies the entire artifact dependency graph used to build that. So now imagine if MITRE, right, or some other vulnerability-authoring entity, right, they say, oh, here's the next Heartbleed, and we've traced it back to these source code files, which have these hashes. You don't have to scan through all these, you know, big JSON or XML files for your SBOMs; you can just do a grep: is this hash anywhere in the hash table for the environment I'm running? If it is, it doesn't mean you are vulnerable, but it means you need to go investigate. It means that source code file was built in somewhere in the supply chain to the product you're running. That's what we're trying to do across all ecosystems, languages, build tools.
Heather Joslyn 15:57
That's great. So it'll be much easier to find …
Aeva Black 16:01
High-quality signal for defense teams is the primary goal. And there's a bunch of other things that it enables as well; that's the main goal. Cool, and that's open source? It's fully open source. We've got a team working on a couple of proofs of concept right now. And the main effect I'm hoping to achieve from this is a small change in every language and compiler, you know, small goals. Yeah, a little change in, like, all of them; as long as it's consistent, then we can get traceability across the whole supply chain.
Heather Joslyn 16:29
That's great, that's great. And we'll put the link to the GitBOM repository and the GitBOM dev website in the show notes to this. Just one more thing: the OpenSSF released a mobilization plan this year. What are your thoughts on that? What are your hopes for that?
Chris Short 16:45
I mean, I've glanced at it; I haven't given it its, you know, full 52-page due diligence.
Heather Joslyn 16:51
Short, casual reading.
Chris Short 16:54
As Aeva mentioned before, this is good if you want to go read something before bed. I think with any kind of framework or any kind of recommendation, it always gives people the feeling of, oh, something else I have to do. And if they don't have the tooling in place to, like, implement these new things that come out, that people want you to, you know, check the box on, then you're going to be way behind the power curve. So setting up something to maintain your security systems and automate those security practices is going to save you a lot of time when SLSA, or when something else, breaks. You know, if you can get a build system or some kind of automation system to maintain everything, you can do the same thing with your security infrastructure.
Aeva Black 17:41
And the OpenSSF as a whole is putting a lot of effort into education, building best practices, building tools to support other communities. There's very little, you know, code or projects that the OpenSSF itself produces; there's a few. But primarily what we're doing, and what the work stream that you're talking about is largely about, is outreach to and coordination with all the languages, all the build tools, the rest of the open source ecosystem, sort of at its building blocks, to help add security awareness and improve the security capabilities of everybody. That's what it's about.
Chris Short 18:20
I mean, the awareness is the biggest thing, right? Like, you don't go out on a big shipping boat without radar or sonar, right? Like, you need some kind of visibility into what's going on around you to make decisions about a boat. And your enterprise is a lot like a boat these days: a big boat that is kind of hard to turn around. But you need to have that tooling in place to tell you something's out there, something's below here that you need to look at, or go back to the vendor that made it and be able to find it, right? It's something you find right away, right? You don't want to run aground, you don't want to hit the iceberg. You want to know exactly what happened and how to avoid it in the future, right? Like, that's the biggest thing. We don't want the whole compartment to flood; we want good, stable systems. So we need to put the intelligence into our systems to make those happen for us, in a lot of cases.
Aeva Black 19:17
And that's as true in our open source projects as it is in our companies and products, right? And we need better tooling upstream as well.
Chris Short 19:23
And the Kubernetes community is a perfect example. They are constantly iterating on their release processes: the way, and how much, things are automated versus, you know, human involvement. And we see these debates happening inside, you know, teams all the time. Like, do we actually want continuous integration? Or do we want to just stop right before release and say, push button, go? There's a lot of safety in that, you know, feeling. But the fact is, if you've built all the tests and you've done it right, you should be able to just go ahead and deploy it automatically. But then there's that: oh, did we not test this one use case, or is this vulnerable? Like, is this configuration gonna create a vulnerability, kind of thing. So there's all this trepidation about just go live. Well, if you build all the things to safeguard it, you don't have to worry about that.
Heather Joslyn 20:10
OK, so I guess that's all for now. Thank you very much, Aeva and Chris, for joining us.
Aeva Black 20:15
Thanks so much for having us.
Heather Joslyn 20:16
Thank you. And thank you again to our sponsor, AWS. This has been Heather Joslyn for The New Stack, and we'll see you next time.
Colleen Coll 20:24
Amazon Web Services is the world's most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally. Millions of customers trust AWS to power their infrastructure, become more agile, and lower costs.
Alex Williams 20:43
Thanks for listening. If you like the show, please rate and review us on Apple Podcasts, Spotify, or wherever you get your podcasts. That's one of the best ways you can help us grow this community, and we really appreciate your feedback. You can find the full video version of this episode on YouTube; search for The New Stack, and don't forget to subscribe so you never miss any new videos. Thanks for joining us, and see you soon.
Transcribed by https://otter.ai