Working at Cased
Cased is an engineering enablement platform focused on making production work better. It combines a web-based terminal, approval workflows, and runbook automation to improve day-to-day DevOps and support work. Cased also provides detailed logging and audit capabilities to help with compliance.
How engineering works at Cased
How are the teams structured?
We're a small, early-stage startup with an engineering team of five. Right now we work as a single team - you can think of it as one pod with a lot of collaboration between frontend and backend folks. As we grow to about eight engineers, we're likely to split the team along functional lines - say, backend infrastructure and frontend.
What tools do engineers use?
- Design and Prototyping: Figma
- Issue Tracking: Linear and GitHub
- Source Control: GitHub
- Development Environment: GitHub Codespaces
- CI Pipeline: GitHub Actions
- Internal Documentation: Notion
- Communication: Slack
- Monitoring: Prometheus and Grafana
Can developers pick their own tools?
Engineers here are on a mix of operating systems: one on Windows, two on different cuts of Linux, and a few on Mac. For development, we encourage Codespaces only because it's easy, but you don't have to use it.
We've created a preferred, easy path (centered on Codespaces) that is probably going to be the best, but if you want to use different tooling locally, you can, and we'll help make it work. There's no organizational restriction on what tools you can use - we're always open to new tools. If people want to introduce them, they can just start using them and see if anyone else wants to adopt them.
From a tooling standpoint, we want to organizationally support what we think is a best-practice dev stack, but allow people to do more. We think that if the tooling we officially support is compelling enough, it will be the one people want to use.
How does the development process work? What's the process for working through bugs, features and tech debt?
Ben, one of the co-founders, wears many hats - product design, product management, product everything, future direction, etc. He usually designs things in Figma, at different levels of fidelity. We tend to do some very basic "figuring stuff out" design work in the early stages. We think the idealized model - where a designer comes up with a design, hands it off, and then it gets built - just isn't true. What we've found is that the building is not necessarily that hard; most people can write code. Most of the hard work is actually spent figuring out feasible solutions.
Engineers start getting involved pretty early in the product development process; a lot of that time is spent going back and forth with product design, figuring out what's actually possible and what it's going to look like. We intentionally create loose, high-level initial specifications, and then we go back and work from there.
During development, customer bugs will almost always bump to the top of priorities - part of that is because we have, at the moment, a relatively low bug volume, which allows us to prioritize them. In terms of product priority, we have an overriding, year-out roadmap - higher fidelity over the next month or two, slightly lower precision over three or four months, and so on. We prioritize features that are blocking a customer from doing something they'd be expected to be able to do in the product right now. Nice-to-have features are logged in the roadmap. If we're talking with a customer who has a particularly interesting feature request that seems not just like a nice-to-have but a natural pain point, we'll bump that up.
How does code get reviewed, merged, and deployed?
At this point, any engineer can review and approve PRs, but generally the expectation is that the person who knows that particular area of the code best will cover it. Our reviews mostly focus on logic questions, with a little attention to proper factoring of the code. We try to emphasize certain best practices for folks who are either newer to Python or inexperienced with SSH. When things get approved, they're merged into the main branch.
We then generally cut a quasi-staging release, which we test extensively internally, before eventually cutting a full release. A lot of our processes exist because of something a little unique for us in the current world: our software runs on-prem (for security reasons), so we're not deploying to customer environments 10 times a day. But we may be deploying to our own environment several times a day and testing there. One consequence is that it forces us to do a lot of internal QA on our own stuff. We use our own product extensively - for our own actual SSH, we're always using a staging or development branch. Once we've built enough confidence through our own internal usage of the tool, we'll cut releases.
What is the QA process?
We don't have a dedicated QA team, though we may eventually. Everyone does QA: engineers are expected to QA their own work, both in their development branches and once things are deployed. Because we're constantly using our platform internally, we end up testing a lot.
What are some recent examples of interesting development challenges solved by internal teams as part of building the product?
Being on-prem makes it difficult to reproduce bugs, which is a little tricky for us. So one of the things we've solved - a mix of development and production tooling - is something as simple as getting crash logs: when a customer hits a problem, they get a pretty detailed traceback and log, which they can then email to us. That has actually solved a lot of our problems, and we're then able to try to reproduce things.
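This kind of crash reporting can be sketched in a few lines of Python. The sketch below is illustrative only, not Cased's actual implementation: the report path, the fields captured, and the support prompt are all assumptions. It installs a `sys.excepthook` that writes an uncaught exception's traceback, plus some environment details, to a file the user can email in.

```python
import datetime
import platform
import sys
import traceback

# Hypothetical report location; the real product's path and format are not public.
CRASH_LOG_PATH = "crash-report.log"

def crash_hook(exc_type, exc_value, exc_tb):
    """Write a detailed crash report the user can email to support."""
    report = [
        f"time: {datetime.datetime.now(datetime.timezone.utc).isoformat()}",
        f"platform: {platform.platform()}",
        f"python: {platform.python_version()}",
        "traceback:",
        "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
    ]
    with open(CRASH_LOG_PATH, "w") as f:
        f.write("\n".join(report))
    # Tell the user where the report lives instead of dumping a raw traceback.
    print(f"Something went wrong. Please email {CRASH_LOG_PATH} to support.",
          file=sys.stderr)

# Uncaught exceptions anywhere in the process now produce a mailable report.
sys.excepthook = crash_hook
```

The value of the hook is that the customer, not the vendor, holds the environment: a self-contained report with the traceback and platform details is often enough to attempt a reproduction without access to the customer's deployment.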
Most of our challenges come from working in a heavily on-prem environment and figuring out how to do that at the level most of us expect from cloud development, where you have total visibility into everything that's going on. Being able to do that in a way that is secure and compliant, especially given the nature of our product, is interesting.
How does on-call work?
The founders are on call 24/7 - we think this is something that most founders need to do for at least a bit. Then we have additional escalation levels. That's going to evolve as we hire more engineers of course.
Hiring process at Cased
How does the application process work? What are the stages and what is the timeline?
In the past, we have experimented with technical screeners - sometimes they're useful, sometimes they're not. We're currently not doing them, but we may go back to them. After an initial screening call, there's a second, more technical interview with one engineer where we go over experience, work style, problems you've faced, and other things like that.
Then we have a pairing session, usually done in Codespaces, involving actual problems we've recently worked on - usually two problems and how you might solve them. We extract a small piece of our actual codebase and create a slightly more isolated version of the app that lets people work. Applicants share their screen, pair on code, work through problems, and maybe write some tests. Then we try to make sure the candidate can speak with all the team members - not just the engineers but also folks on the go-to-market side - to give everyone a good sense of the person.
What is the career progression framework? How are promotions and performance reviews managed?
We spend a lot of time asking folks one-on-one whether they eventually want to go into management. Some do, many don't. We're eventually going to end up with a classic management track, but one you're not committed to forever. We do think it's important to build up competencies by mentoring people in either direction (IC or management), because they are very different career paths. We expect to start with two or three types of more senior positions - we already have these, though they aren't properly refined yet - and as we grow, the granularity will increase. We don't necessarily think the concept of leveling needs a huge amount of innovation. Right now, there's so much communication between team members that we're able to do performance reviews just as everyone works.
Console is the place developers go to find the best tools. Each week, our newsletter picks out the most interesting tools and new releases. We keep track of everything - dev tools, devops, cloud, and APIs - so you don't have to.