
Shipping Clean Code at Sentry with Linters, Travis CI, Percy, & More

Shipping clean, safe, and correct code is a high priority for engineering at Sentry. Bugs are best discovered before they hit production because afterward they have real user impact and can drain even a high-performing team’s resources quickly. The later in the development cycle a bug is found, the longer it will take to fix. (See research conducted by NIST.)

At Sentry, we are strong proponents of fixing bugs as early as possible. These are the specific tools and practices we use to do that:

  • Linting and Autoformatting
  • Automated Testing
  • Visual diffs of UI changes
  • Measuring test coverage
  • Detecting package vulnerabilities

Continuous Integration & Deployment

Before we get to the tools, it is important to mention precisely how continuous integration (CI) and continuous deployment (CD) help us ship better code.

Small code changes contain fewer bugs and promote better code reviews, which help you identify issues early. CI, the practice of keeping code changes small and merging them into the mainline frequently, helps us modernize development, mitigate risk, and increase observability.

CD, the practice of deploying changes to production frequently, reduces both the amount of new code shipped with a release and the time it takes for a change to go live. These factors lead to fewer bugs with each deploy and faster detection, thereby reducing the complexity and duration of investigating, triaging, and fixing the bugs.

This post focuses on the tools we use in our CI process — we’ll discuss CD in a future post.

Linting and Autoformatting

Clean, readable, and consistent code not only makes writing bug-free code easier, but also allows your reviewers to focus on the real issues as opposed to being distracted by those that are style-related.

Sentry uses a combination of linters and auto-formatters to create that clean, readable, and consistent code. Linters are tools that expose syntactic and semantic errors in your code. Auto-formatters are tools that automatically format your code, eliminating debates about code style, ensuring consistency, and being more efficient than hand-formatting.

See them in action below:

ES6 Linting

Prettier Auto-Formatting

Linters and auto-formatters can be integrated with your IDE/editor, so you get real-time feedback as you code. Some people prefer to run them as pre-commit hooks (run each time you commit a change locally) so that they can code distraction-free and run the checks only when they’re ready to commit.
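To make the idea concrete, here is a toy lint pass written with Python’s standard-library `ast` module. It flags variables that are assigned but never read — one of the many checks real linters like flake8 and ESLint perform (this is a sketch of the concept, not how those tools are actually implemented):

```python
import ast

def find_unused_assignments(source):
    """Toy lint pass: report names that are assigned but never read.

    Returns a sorted list of (line_number, name) tuples.
    """
    tree = ast.parse(source)
    assigned, used = {}, set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned[node.id] = node.lineno  # name being written
            else:
                used.add(node.id)                # name being read
    return sorted(
        (line, name) for name, line in assigned.items() if name not in used
    )

code = "x = 1\ny = 2\nprint(x)\n"
print(find_unused_assignments(code))  # [(2, 'y')] — y is never read
```

Production linters layer hundreds of such checks (plus style and correctness rules) on top of a full parse of your code, which is why their feedback is so much richer than a compiler’s syntax errors alone.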

At Sentry, we use the following linters and formatters:

Language     Linter   Formatter
JavaScript   ESLint   Prettier
Python       flake8   autopep8
Rust         Clippy   rustfmt

Automated Testing

CI becomes really powerful when coupled with automated testing — your code is built and tested automatically whenever you open a pull request (PR). If all tests pass, you can have a high level of confidence that the change is safe to merge. After merging, tests are run once again to check if the particular commit is safe to deploy.

Automated tests give you immediate feedback, allowing you to fix issues faster when the context is still fresh in your memory, rather than later when you’re in the middle of something else. Running tests on every change also means that it is easy to identify which commit caused tests to start failing.
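As a minimal sketch of what such an automated test looks like, here is a test case written with Python’s standard-library `unittest` framework. The function under test, `apply_discount`, is a hypothetical example, not actual Sentry code:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        # 25% off 100.0 should be 75.0
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_bad_percent(self):
        # Out-of-range discounts must raise, not silently misbehave
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run locally or in CI with: python -m unittest this_file.py
```

In a CI setup, the test runner’s exit code is what marks the build green or red on the PR, so a failing assertion blocks the merge automatically.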

Sentry’s automated tests are run by Travis CI. When the results are ready, Travis CI posts them to the PR on GitHub. Automated tests are especially important for us because we support a large number of programming languages and are open-source, so we need to test against several platforms and environments in parallel. For example, server changes are tested against three databases (MySQL, SQLite, and Postgres), and the Python SDK against 24 combinations of different versions of Python, Django, Flask, etc.

Travis CI showing test results for various commits

Visual Diffs of UI Changes

How do you ensure your change doesn’t modify the UI in a subtle, but unexpected, way? For example, try spotting the difference between these two screenshots:

Sentry UI comparison

Percy is a tool that compares your UI before and after a code change and shows you any visual differences in an intuitive way. For example, here is Percy in “diff” mode showing you that there was an extra “Auth” item in the left sidebar:

Percy diff mode

Our test suite includes tests that run in the browser using Selenium. The markup and CSS rendered by the browser are uploaded to Percy, which re-renders it in its own browsers at different resolutions, captures screenshots of the results, and compares them pixel-by-pixel to a known good set of screenshots. If you approve any changes, Percy begins using the new snapshots as the baseline for future changes.
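The core of the pixel-by-pixel comparison step can be sketched in a few lines. This toy version represents a screenshot as a grid of (R, G, B) tuples and reports which coordinates differ — real tools like Percy work on actual rendered images and add baseline management on top:

```python
def diff_screenshots(baseline, candidate):
    """Compare two equally sized screenshots pixel by pixel.

    Each screenshot is a list of rows, each row a list of (R, G, B)
    tuples. Returns the (x, y) coordinates of every changed pixel.
    """
    if len(baseline) != len(candidate) or len(baseline[0]) != len(candidate[0]):
        raise ValueError("screenshots must have the same dimensions")
    changed = []
    for y, (row_a, row_b) in enumerate(zip(baseline, candidate)):
        for x, (px_a, px_b) in enumerate(zip(row_a, row_b)):
            if px_a != px_b:
                changed.append((x, y))
    return changed

white, gray = (255, 255, 255), (200, 200, 200)
before = [[white, white], [white, white]]
after = [[white, white], [gray, white]]
print(diff_screenshots(before, after))  # [(0, 1)] — one pixel changed
```

Even a one-pixel margin shift shows up in the changed set, which is exactly why automated diffing catches regressions a human reviewer would scroll right past.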

Automated visual diffs have a number of advantages over manual inspection:

  • More efficient
  • Helps catch regressions triggered by a change in unrelated parts of the product
  • Shows you exactly what changed, removing any guesswork
  • Catches subtle, unnoticeable changes, like a margin change from 1px to 2px

Percy is not always 100% correct; it sometimes gives false positives where elements appear shifted by a pixel or two even though nothing in the markup changed. However, the false positive rate is low, and overall it’s a very useful tool.

Measuring Test Coverage

Everyone knows having tests is good, but measuring your test coverage is even better. Coverage is the ratio of the number of executable lines exercised by your tests to the total number of executable lines.
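That ratio is simple to compute once you know which lines ran. Here is a sketch, assuming line numbers have already been collected by a tracing tool (the function name and inputs are illustrative, not Codecov’s actual API):

```python
def coverage_percent(executed_lines, executable_lines):
    """Line coverage as a percentage.

    executed_lines: set of line numbers the tests actually ran.
    executable_lines: set of line numbers that could have run
    (blank lines and comments are excluded).
    """
    if not executable_lines:
        return 100.0  # nothing to cover
    covered = executable_lines & executed_lines
    return round(100 * len(covered) / len(executable_lines), 1)

# Suppose lines 1-10 are executable and the tests hit lines 1-8:
print(coverage_percent(set(range(1, 9)), set(range(1, 11))))  # 80.0
```

Coverage services aggregate this per file and per commit, which is what makes the line-by-line view below possible.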

Travis uploads our coverage data to Codecov, which computes this number and also shows a detailed line-by-line analysis:

Codecov results

green: executed, yellow: partially executed, red: not executed

Codecov is useful for a number of reasons:

  • To enforce coverage on critical parts of the code like billing.
  • To provide a long-term indicator of test-health for non-critical parts of the codebase.
  • To ensure that new tests actually exercise the code they’re meant to test (we’ve all been there — discovering a test that passes because it doesn’t actually test anything).

Detecting package vulnerabilities

With so much software freely available today, it is natural that a good portion of your code relies on open-source libraries and packages. With that comes the risk of inheriting security vulnerabilities in those packages. A famous example is Heartbleed, a vulnerability in the OpenSSL library that put practically every web service on earth at risk.

At Sentry, we use Snyk to alert us to vulnerabilities in JavaScript packages as those vulnerabilities are discovered. Snyk integrates with GitHub to fail tests if a PR introduces a known vulnerability, and it also automatically opens pull requests when a fix becomes available, either patching the dependency or upgrading its version.
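At its simplest, this kind of check is a lookup of each pinned dependency against an advisory database. The sketch below uses a hand-written advisory dict with made-up package names; real tools like Snyk pull from a continuously updated vulnerability database and also match version ranges, not just exact versions:

```python
# Hypothetical advisory data: package name -> versions with known issues.
ADVISORIES = {
    "left-pad-ish": ["1.0.0", "1.0.1"],
    "old-crypto": ["0.9.2"],
}

def vulnerable_dependencies(lockfile):
    """Flag pinned dependencies whose exact version has a known advisory.

    lockfile: dict mapping package name -> pinned version string.
    """
    return sorted(
        (name, version)
        for name, version in lockfile.items()
        if version in ADVISORIES.get(name, [])
    )

lockfile = {"left-pad-ish": "1.0.1", "old-crypto": "1.0.0", "safe-lib": "2.3.4"}
print(vulnerable_dependencies(lockfile))  # [('left-pad-ish', '1.0.1')]
```

Running a check like this in CI means a PR that introduces a flagged version fails before it ever merges, which is exactly the behavior described above.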


The best kind of bug is one that never ships. In the last few years, there have been dramatic improvements in tools and processes for improving software quality pre-release, and many of these tools are customizable in the checks and integrations they provide. Used in combination, they can greatly reduce deployment anxiety (and if you did break something, there’s always Sentry).

Attending GitHub Universe next week? Join Sentry and Travis CI at our GitHub Universe party on Tuesday, October 16th! RSVP here.
