Becoming a data leader with Mike Cohen, Head of Data at Substack
Welcome to the Canvas Podcast, where we bring data and business leaders together to talk about how to make data easier for everyone. Today I'm super excited to have Mike Cohen, the Head of Data at Substack, on the show.
Do you want to start by telling us about yourself?
Thanks for having me. Yeah, my name's Mike. I'm the Head of Data at Substack. I've been working at Substack for a little over three years now. I started when we were a team of seven people, and now we're around 90 people.
How did you get your start in data?
I've had a somewhat unorthodox path into data. I started my career a long time ago in economic consulting. I worked in that industry for about four years after college and then realized I wasn't meant for it.
So I went to business school to try to find a new industry. Ended up finding tech and interned at Venmo, working on Growth Engineering. This was back in the summer of 2013 when Venmo wasn't quite the household name it is today. Did a lot of really cool projects there and got a taste for it.
Went back full-time to Venmo after business school. Started more in a traditional, post-MBA role working on Strategic Business Operations. But I kept feeling like I was hamstrung by not being able to get in and work with data and answer my own questions.
I began figuring out how to be a technical person at Venmo. I started by learning SQL, Python, and R. I was learning in my free time in the evenings and would come in the next day and try to apply it to my work at Venmo.
Ultimately, I started incubating and building out the first analytics team at Venmo. And from there, it was off to the races. After that, I worked at a few different companies, starting with a small consumer startup called Fin. I think of my time there as a sort of Master's Degree in software.
I started to see how data and software worked together. And that was a pretty full-stack role. So a lot of analytics-type questions and things, data engineering, and some machine learning. That was a pretty cool formative experience. Even if the company didn't work out, I learned a lot.
From there, I went to work at Affirm where I was working more on the machine learning side, working on the credit model, which is their bread-and-butter product.
Then I hopped around a bit doing some more data engineering stuff before ultimately landing at Substack around three years ago, which has been like going back to basics, starting with thinking about how to set up data infrastructure for a high-growth company. It’s been pretty great.
How'd you teach yourself concepts like SQL, Python, and R? Any courses you recommend?
I used SAS at my consulting job and had some Matlab experience from undergrad. So I had some familiarity with the ideas behind programming. There were some bridges to traverse, but it wasn't insurmountable. I would read stuff at night and took a course on EdX on Sabermetrics, basically baseball statistics.
I'm not a huge baseball fan per se, but I understand concepts like on-base percentage, batting average, and things like that. It was a good way to learn SQL in an applied manner that made sense to me. I didn't want to take a course about how many animals there were in a zoo.
I also did one of the Coursera courses on R, but I can't recall which one. And then, after that, it was just being in a position at Venmo where I could use the tools I was learning.
You’ve worked at FinTech, Hardware, and Consumer startups. Any particular things that you look for before joining a company?
It's been an iterative process in figuring out what resonates most. When I started at Venmo, it was like 30 people. I stayed there until it was around maybe 120.
Then I was at Fin from about 15 to 70 people. Affirm was like 500 people; when I left, it was 800. Then I did a stint at PAX, a hardware company with about 300 people.
And basically, what I learned along the way is a couple of things. One, I prefer the smaller companies first and foremost. I think there's something special about working in a place where you know a little about every person in the company. You recognize faces and names and know a little about what they do. I find that quite enjoyable.
At a small company, you can also move fast. You can have an idea on a Monday about something that might be cool to see or do. And, in crazy cases, you can have an MVP the same day, or in the worst case, you could have an MVP by the end of the week.
That's a lot harder to do the bigger the company gets, based on what I've seen, for numerous valid reasons. The blast radius of making a mistake gets bigger and bigger, or there are just more legal ramifications or consequences.
So there are valid reasons for slowing down, but it's less exciting. I like variety in what I do daily and just the manic ups and downs of startup life. And then, regarding software versus hardware, working at a hardware company was an eye-opening experience.
Hardware is much slower. There are supply chains and physical devices, so you ship far fewer changes, and you have to wait until the hardware is fully rolled out. It's a very different life.
I’ve found that I just enjoy smaller companies making a product that I enjoy using.
How did you prioritize things as a company's first data engineer or first data scientist? And how did this change once you were in a management position?
Data had already been considered when I got to Substack. The founders and early engineers were pretty good on the data front, so I wasn’t starting from scratch.
That said, there was some stuff to jump in on and start doing from the outset. For one, we didn't have any data warehouse infrastructure. We were running only on two different Postgres databases. The first thing I did was to figure out which data warehouse we would go with.
We ended up choosing Snowflake and setting up data pipelines to Snowflake. Then we got a BI platform and ended up going with something we were familiar with and are still pretty happy with.
Then we began setting up the transformation piece of our stack. I had built something at a previous job at PAX that worked pretty well and was pretty flexible, so I reconstructed that at Substack with some lessons learned mixed in.
We moved very quickly on setting up infrastructure while simultaneously digging into questions, trying to understand the business, and helping the various teams and stakeholders.
Any lessons learned on the transformation front that you'd be willing to share?
We have a system that lets us do similar things to dbt. It doesn't have the bells and whistles, but it lets us define transformations primarily via SQL in DAG order, so things build on a schedule, in sequence. Everything is configuration based: you define a query and a schema, and everything else happens automatically from there.
A nice thing about what we have, too, is that we use the data warehouse, the power of Snowflake, as our compute engine. Then, for things we want to subsequently serve back in the product, where something like an index would be beneficial, there's a way to very easily add a couple of lines to a table's configuration to send that table back out to a separate Postgres database. You can just say, hey, send this to Postgres, here are the indexes. That's been quite convenient for us.
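To make that concrete, here's a minimal sketch of what a configuration-driven transformation like that could look like. This is not Substack's actual system; the model name, fields, and SQL below are hypothetical, and a real runner would execute against Snowflake and Postgres rather than printing DDL.

```python
# Hypothetical sketch of a config-driven transformation layer, loosely in the
# spirit described above. Names and structure are assumptions, not Substack's system.
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str                       # target table name in the warehouse
    query: str                      # SQL that defines the transformation
    depends_on: list = field(default_factory=list)   # upstream models (DAG edges)
    sync_to_postgres: bool = False  # mirror the result into a serving database?
    postgres_indexes: list = field(default_factory=list)

MODELS = [
    Model(
        name="writer_daily_revenue",
        query="""
            SELECT writer_id, DATE(created_at) AS day, SUM(amount) AS revenue
            FROM subscriptions
            GROUP BY 1, 2
        """,
        sync_to_postgres=True,
        postgres_indexes=["writer_id", "day"],
    ),
]

def build(model: Model) -> None:
    # In the warehouse: create-or-replace the table from its defining query.
    print(f"CREATE OR REPLACE TABLE {model.name} AS {model.query.strip()}")

    # Optionally mirror the table into Postgres (copy the data, then add the
    # requested indexes) so the product can do fast, indexed lookups.
    if model.sync_to_postgres:
        for col in model.postgres_indexes:
            print(f"CREATE INDEX IF NOT EXISTS idx_{model.name}_{col} "
                  f"ON {model.name} ({col});")

for m in MODELS:  # a real runner would topologically sort by depends_on
    build(m)
```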
How has your mindset shifted now as a team leader? How do you think about scaling the team?
It's been a fun, challenging, yet rewarding part of the job. I was a solo show for the first year, then we added one person, then a few others, and then lost a few, so I was back on the hiring train for some time. We're now at a good equilibrium state, and the way I've been thinking about it is that, as our product organization has evolved, I want to have one person working on each of our product priorities.
Nothing revolutionary in that idea, but that's how we've been thinking about it. The rate at which I imagine we'll continue to scale the data team is proportional to the number of product teams we're running, where each product team is composed of a data person, a designer, two sets of engineers, a product manager, and an engineering manager.
What has the North Star metric been at Substack? Has it changed since you started?
It has evolved somewhat, but not too much. Substack is a way for writers to earn money for what they're writing. We think what you read matters, and writers should be paid for their work. So for us, the North Star has always been Annual Recurring Revenue (ARR): essentially, how much are writers on Substack earning? Our business model is very tightly aligned with that number.
So for us, it feels like a win-win. We track, monitor, and look at plenty of other metrics holistically. But that's our key: we do well if writers do well.
We talked about your new Notes feature before the show. How does data inform the roadmap at Substack?
Yeah. I think it's still early innings. I said I didn't like baseball, and here I am making a baseball reference about Notes. But we see it as a way for writers to connect with their audiences on a deeper or more casual level.
It's also a very interesting way for readers and writers to discover one another. It's more of a short-form space, but as you scroll and see what people are sharing or reading, it's a very easy way to check someone out and, if you like their work, subscribe. It's also just a cool way to see people you look up to.
For example, there are some basketball writers whose every piece I read, and it's cool to just see them on Notes and like or share what they're writing. It's a pretty cool experience to have the writer like or respond to you, too.
What’s the data culture you try to foster?
Data is pretty deeply involved across the company. I think it helps, for starters, that two of the three founders are engineers by background and trade, so they are fairly self-sufficient with data and can dig around, build dashboards, and ask their own questions.
So it starts at the top and then trickles down throughout. Maybe half the company can do stuff in SQL, and we have some no-code ways to explore the data for the other half.
Within each team, we organize work into sprints and like to opportunity-size different things: hey, this is a thing we should do; what's the opportunity size, meaning what impact will we get if we do it?
After we size the impact, the process gets started. The designer takes the baton, then the engineer takes the baton. Okay, great, how are you going to measure it? Data takes the baton. Okay, great, let's launch it. Then Product takes the baton and, along with Data, doesn't quite celebrate but monitors to see how things are going.
We have to ensure that Data is at the table when things are being decided, or at the very least when we're figuring out how to measure success for the thing, hopefully earlier rather than later.
Do you have any frameworks or suggestions on the opportunity sizing?
I don't think I have anything particularly novel or unique. It's based on things like: how many people are visiting this page per week? Okay, what if we put a button here? What percent do we think would click on it? And what would that lead to? So there's a lot of estimation and guessing that goes into it.
But the first-principles part of it is: what's the current state of the world, what's a reasonable thing we think we could get this to, and how would that translate, in our case, to ARR for writers?
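As a purely illustrative sketch of that kind of back-of-the-envelope sizing (every number below is hypothetical, not a Substack figure):

```python
# A hypothetical opportunity-sizing estimate in the spirit described above.
# All inputs are made-up assumptions for illustration only.

weekly_page_visitors = 50_000    # assumed: people visiting the page per week
expected_click_rate = 0.03       # assumed: share of visitors who click the new button
conversion_to_paid = 0.05        # assumed: share of clickers who end up subscribing
avg_annual_subscription = 80.0   # assumed: average annual subscription price, in dollars

new_paid_subs_per_year = (
    weekly_page_visitors * 52 * expected_click_rate * conversion_to_paid
)
incremental_writer_arr = new_paid_subs_per_year * avg_annual_subscription

print(f"Estimated new paid subscriptions per year: {new_paid_subs_per_year:,.0f}")
print(f"Estimated incremental writer ARR: ${incremental_writer_arr:,.0f}")
```

The point isn't precision; it's getting a rough, comparable number so different ideas can be ranked against the same North Star.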
What problems has the Modern Data Stack solved, and what problems remain?
A lot has changed in my short time in data, and the most amazing thing is probably just having a data warehouse and being able to run pretty crazy queries on pretty big datasets.
I didn't have too much experience in the pre-Redshift, pre-Snowflake era. I'm happy in my blissful ignorance that I don't know what that's like.
It's pretty nice to work with the size and scale of data that we see today in a way that's pretty easy overall. It's cool that SQL is such a powerful and versatile language.
The best thing I ever did was learn to become very good at SQL. I think that's probably the skill that's served me the most and continues to serve me the most daily.
As for predictions, I think some interesting things could still happen in the data warehouse space. I think the lack of indexes is an interesting thing.
I know Snowflake, for example, and I'm not trying to overly plug them, has a few things in their pipeline that are pretty interesting. Things like hybrid tables seem like an interesting way to merge the power of the data warehouse with the power of fast lookups.
And yeah, I think something like that is on the horizon and will be useful for many people. And then the generative AI stuff I don't know much about; I'm at the nascent stages of understanding it, and it's cool. I'm waiting on the sidelines to see what applications work, and I'll be eager to try those too.
Where can people learn more about you and Substack?
Check out the new Notes feature on Substack. It’s super low friction and easy to discover and explore people’s work. And you can find me on Substack and Twitter.