  • Looking Back on 2016

    2016 was a big year for Sentry. It was another test of whether we could turn a small idea into a big vision. Just a year prior there were only two of us, with an overwhelming audience to support. We finally started to consider the potential, and with that vision began making our first hires. The last year was a continuation of that expedition. We built the team to an amazing 25 people while growing our footprint by an order of magnitude. Hundreds of thousands of developers have put their trust in Sentry to help them continuously ship software. The future is about building on that foundation and executing on the trust you’ve given us.

  • Looking Back on 2015

    In January of 2015, Chris and I sat down and decided it was time to commit to Sentry (no pun intended). We opened our first office here in San Francisco, hired the best people we knew, and set out to take Sentry to an entirely new level. Let’s take a look at what happened in 2015.

  • Buffering SQL Writes with Redis

    It’s no secret that one of Sentry’s core technologies is SQL, specifically PostgreSQL. We’re huge advocates of simplicity, and Postgres is one of those tools that’s not only quick to get started with, but can also grow with you. While at our scale very few things are simple, we’ve still managed to keep complexity to a minimum.
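
    The post goes into the details; as a rough illustration only (the key, table, and column names here are invented for the example, not Sentry’s actual schema or code), the general pattern looks something like this: accumulate hot counter updates in a Redis hash, then flush them to Postgres in a single batch.

        import redis

        r = redis.StrictRedis()

        def buffer_increment(project_id, delta=1):
            # Cheap O(1) Redis write on the hot path; no SQL row lock is taken.
            r.hincrby("buffer:event_counts", project_id, delta)

        def flush_buffer(conn):
            # `conn` is a Postgres connection (e.g. from psycopg2.connect()).
            # Swap the buffer out atomically so new writes keep accumulating.
            pipe = r.pipeline()
            pipe.hgetall("buffer:event_counts")
            pipe.delete("buffer:event_counts")
            counts, _ = pipe.execute()

            with conn.cursor() as cur:
                for project_id, delta in counts.items():
                    cur.execute(
                        "UPDATE example_counters SET events = events + %s WHERE id = %s",
                        (int(delta), int(project_id)),
                    )
            conn.commit()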

  • What is Crash Reporting?

    Crash reporting is a critical programming best practice. However, if you’ve never been exposed to the concept before, it can be tough to understand how it works and why it’s valuable. Here is how we look at crash reporting at Sentry.
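
    To make that concrete, here is a minimal sketch using the raven Python client (Sentry’s Python SDK of that era) with a placeholder DSN; the idea is simply that an unhandled exception gets captured, stack trace and all, instead of vanishing into a log file.

        from raven import Client

        # Placeholder DSN: every Sentry project has its own.
        client = Client("https://examplePublicKey:examplePrivateKey@sentry.example.com/1")

        try:
            1 / 0  # any unexpected error in your application code
        except ZeroDivisionError:
            # captureException() grabs the active exception from
            # sys.exc_info() and reports it along with the stack trace.
            client.captureException()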

  • Monitoring the Monitor

    At Sentry we aim to make crash reporting pleasant. For us, that means you don’t need to dig into sparse logs to determine what’s going on. The entire reason the project exists is because that problem had gone unsolved, and we had experienced how painful it was. Unfortunately this leads us back into the hole ourselves: the battle with recursion means we can’t always rely on Sentry to monitor Sentry. Our problems are also a bit more complex than those of most SaaS services, since we ship an On-Premise solution as well. This means we need to support monitoring in a way that carries over. So what do we do?

  • Internationalization and React

    It’s always nice when a project outgrows you in a way. This first happened with Sentry a long time ago, when translations kept rolling in for languages none of us spoke. This was enabled by the excellent gettext-based internationalization support in Django, and by the ability to collaborate through Transifex, an online tool where people can contribute translations, discuss the strings, and raise issues with them.
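
    For readers who haven’t used it, here is a minimal sketch of the Django side of that workflow (the function and message are invented for illustration): strings marked with gettext are extracted into catalogs by makemessages, translated (for us, on Transifex), and resolved per-locale at runtime.

        from django.utils.translation import ugettext as _

        def greeting(username):
            # This literal is pulled into a .po catalog by `makemessages`;
            # translators fill in each language, and gettext returns the
            # translation for the active locale at runtime.
            return _("Welcome to Sentry, %(name)s") % {"name": username}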

  • rb: A Redis parallelization toolkit for Python

    We love Redis at Sentry. Since the early days it has driven many parts of our system, ranging from rate limiting and caching to powering the entirety of our time series data storage. We have enough data in Redis that a single machine won’t cut it.
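
    As a small taste of what that looks like (a sketch based on rb’s public README; the hosts and keys here are placeholders), commands issued inside cluster.map() are routed to the right node and executed in parallel, with each call returning a promise.

        from rb import Cluster

        # Two local Redis nodes for illustration; a real deployment lists
        # every host the keyspace is partitioned across.
        cluster = Cluster(hosts={
            0: {"port": 6379},
            1: {"port": 6380},
        })

        with cluster.map() as client:
            promises = {key: client.get(key) for key in ["foo", "bar", "baz"]}

        # By the time the block exits, every promise has been resolved.
        for key, promise in promises.items():
            print("%s => %r" % (key, promise.value))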

  • Rethinking Sentry's Documentation

    If you have searched for the Sentry or integration docs lately, you might have noticed that some things have changed. There are now consolidated docs for Sentry and the raven clients at docs.sentry.io.

  • Transaction ID Wraparound in Postgres

    On Monday, July 20th, Sentry was down for most of the US working day. We deeply regret any issues this may have caused for your team and have taken measures to reduce the risk of this happening in the future. For transparency, and in the hope of helping others who may find themselves in this situation, we’ve described the event in detail below.

  • Driven by Open Source

    Seven years ago I would frequent an IRC channel set up for users of the Django web framework. Like an old-fashioned Stack Overflow, it was a mix of people asking questions and others answering. At some point, someone asked how to log exceptions to the database. While I didn’t fully understand the need, it seemed not overly difficult, and I helped come up with an example. Shortly afterwards I took that example, threw it into a repository, and committed the first lines of code to what would eventually become Sentry.

  • Continuous Deployment with Freight

    Early on at Sentry we set up Jenkins to automatically deploy after a passing build. Eventually we wanted better control over where we were deploying, and when (e.g. deploy branch FOO to staging). To solve this we started with simple parameterized builds and effectively had something working. Unfortunately, when it came down to adding external controls, we hit the age-old API issues within Jenkins itself.