Evolving Our Rust With Milksnake
We have been using Rust at Sentry quite successfully for more than a year now. However, when others have asked us how they can do the same, we’ve previously been unable to advocate for our particular approach because we weren’t totally happy with it. As we continued to expand the amount of Rust code on our side, this unhappiness became a bigger issue and it felt like we should take another look at how we were managing it.
This second look ultimately became the Milksnake Python library, which can (among other things) be used for building Python packages that include Rust code. We use this in production for all our Rust code now and feel confident sharing it with the wider world.
What Is Milksnake
Milksnake is a Python module that hooks into the setuptools system with the help of CFFI to execute external build commands. These commands build native shared libraries which in turn get loaded by Python through CFFI. This means that Milksnake, unlike earlier approaches, is not Rust specific and can also be used to build C or C++ extension modules for Python.
What makes Milksnake different from other systems (like the integrated extension module system in distutils) is this particular use of CFFI. Milksnake helps you compile and ship shared libraries that do not link against libpython, either directly or indirectly. This means it generates a very specific type of Python wheel. Since the extension modules do not link against libpython, they are completely independent of the Python version and implementation: the same wheel works for Python 2.7, 3.6, or PyPy. As such, if you use Milksnake, you only need to build one wheel per platform and CPU architecture.
What does this mean in practice? Take our symbolic Rust/Python library as an example: it can be installed directly from PyPI on Mac and Linux (our supported platforms) without requiring a Rust compiler, and we only need to publish three wheels. See for yourself.
The only requirement from the user is a version of pip that is new enough to support wheels of this format.
A Bird's Eye View
For Rust, the general setup we follow looks roughly like this. We:
- build our Rust code into reusable crates
- build a new crate that exposes a nice C-ABI (sketched just after this list)
- use cbindgen (not bindgen, which works the other way round) to generate C headers
- use Milksnake to automatically generate low-level bindings from this crate and headers
- generate high-level Python wrappers around the low-level bindings
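To make the second step concrete: the C-ABI crate is an ordinary Rust library built as a cdylib (crate-type = ["cdylib"] in its Cargo.toml), so that cargo emits a shared library Milksnake can find and CFFI can load. Here is a minimal sketch; the function is purely illustrative and not part of symbolic's API:
// src/lib.rs of the C-ABI crate; built as a `cdylib` so that
// `cargo build --release` produces a shared library for CFFI to load.
use std::os::raw::c_int;

/// Exported with the C calling convention and an unmangled name so
/// that cbindgen can emit a matching prototype for it.
#[no_mangle]
pub extern "C" fn example_add(a: c_int, b: c_int) -> c_int {
    a + b
}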
Using Milksnake
If you have your crate and headers, then the use of Milksnake is straightforward and all you need is a setup.py like this:
from setuptools import setup, find_packages

def build_native(spec):
    build = spec.add_external_build(
        cmd=['cargo', 'build', '--release'],
        path='./rust'
    )

    spec.add_cffi_module(
        module_path='example._native',
        dylib=lambda: build.find_dylib('example', in_path='target/release'),
        header_filename='include/example.h',
        rtld_flags=['NOW', 'NODELETE']
    )

setup(
    name='example',
    packages=find_packages(),
    include_package_data=True,
    setup_requires=['milksnake'],
    install_requires=['milksnake'],
    milksnake_tasks=[build_native],
)
What does it do? When setuptools runs, it now invokes the milksnake tasks. These are lazily defined in a function, which is passed a specification object that can be modified. What we care about is declaring a build command which will be invoked in a specific path (in this case ./rust, where we have our Rust C-ABI project).
Then we declare a CFFI module. This needs a Python module path where the CFFI module will be placed, the path to the target dylib (here we use a lambda to lazily return it after the build has finished), and the filename of the C header which defines the functions in the dylib. Lastly, due to a Rust limitation, we cannot safely unload Rust modules on OS X, so we need to pass the RTLD_NODELETE flag when loading the library.
Exposing C-ABIs
Writing a C-ABI can be daunting, but with cbindgen and some macros it can be done really nicely. To give you an idea of how our C bindings work: we wrap high-level Rust crates in C libraries that expose an API which makes sense in C and can be consumed from Python.
This means:
- We do not let panics escape. If we encounter a panic due to a bug, we catch it in a landing pad and convert it into a failure.
- Errors are communicated to the caller through threadlocals (similar to errno), which makes them easier to wrap automatically. Helper functions are provided to check whether an error occurred and what it was.
- Tracebacks from panics are also stored in a threadlocal so we can attach them to Python exceptions later.
- All errors are given unique codes.
- All types are wrapped. If we have a sourcemap::SourceMap type in Rust, we expose an opaque SymbolicSourcemap type for the C-ABI and internally transmute from one to the other. Types are always unsized in the C-ABI unless they are very simple structs.
- We use macros to automatically perform error and panic handling.
The end result gives us something like this:
/// Represents a source view
pub struct SymbolicSourceView;

ffi_fn! {
    /// Creates a source view from the given bytes.
    ///
    /// This shares the underlying memory and does not copy it if that is
    /// possible. Will ignore utf-8 decoding errors.
    unsafe fn symbolic_sourceview_from_bytes(bytes: *const c_char, len: usize)
        -> Result<*mut SymbolicSourceView>
    {
        let sv = SourceView::from_bytes(
            slice::from_raw_parts(bytes as *const _, len));
        Ok(Box::into_raw(Box::new(sv)) as *mut SymbolicSourceView)
    }
}

ffi_fn! {
    /// Frees a source view.
    unsafe fn symbolic_sourceview_free(ssv: *mut SymbolicSourceView) {
        if !ssv.is_null() {
            let sv = ssv as *mut SourceView<'static>;
            Box::from_raw(sv);
        }
    }
}

ffi_fn! {
    /// Returns the underlying source (borrowed).
    unsafe fn symbolic_sourceview_as_str(ssv: *const SymbolicSourceView)
        -> Result<SymbolicStr>
    {
        let sv = ssv as *mut SourceView<'static>;
        Ok(SymbolicStr::new((*sv).as_str()))
    }
}
As you can see, we return Result enums here, but the macro converts this into raw C types. The C-ABI just gets a zeroed value on error and can use a utility function of the ABI to see whether an error was stashed into the threadlocal. Likewise, we do not use standard C strings but a simple SymbolicStr type in our library that holds a pointer, the length of the string, and optional info such as whether the string is borrowed or needs freeing, which makes building generic APIs easier.
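To make that concrete, here is a simplified sketch of roughly what such a string type and landing pad could look like. This is purely illustrative and not symbolic's actual implementation; the real ffi_fn! macro additionally records unique error codes and panic backtraces as described above.
use std::cell::RefCell;
use std::os::raw::{c_char, c_int};
use std::panic::{self, AssertUnwindSafe};

// Error type for the sketch; the real code uses its own error enum.
type Error = Box<dyn std::error::Error>;
type Result<T> = std::result::Result<T, Error>;

/// A string passed over the C-ABI: pointer, length and an ownership
/// flag instead of a NUL terminated char*.
#[repr(C)]
pub struct SymbolicStr {
    pub data: *mut c_char,
    pub len: usize,
    pub owned: bool,
}

impl SymbolicStr {
    /// Borrows a Rust string for the C side; nothing needs freeing.
    pub fn new(s: &str) -> SymbolicStr {
        SymbolicStr {
            data: s.as_ptr() as *mut c_char,
            len: s.len(),
            owned: false,
        }
    }
}

thread_local! {
    // errno-style slot holding (code, message) of the last failure.
    static LAST_ERROR: RefCell<Option<(c_int, String)>> = RefCell::new(None);
}

/// Roughly what a macro like ffi_fn! wraps around each exported body:
/// run it, stash failures (including panics) in the threadlocal and
/// hand a zeroed value back to the C caller. Zeroing is only OK here
/// because everything crossing the boundary is plain data or a raw pointer.
unsafe fn landing_pad<T, F: FnOnce() -> Result<T>>(f: F) -> T {
    match panic::catch_unwind(AssertUnwindSafe(f)) {
        // happy path: pass the value through untouched
        Ok(Ok(rv)) => rv,
        // Rust error: stash it (the real code derives a unique code
        // from the error kind) and return a zeroed value
        Ok(Err(err)) => {
            LAST_ERROR.with(|slot| *slot.borrow_mut() = Some((1, err.to_string())));
            std::mem::zeroed()
        }
        // panic: caught at the boundary and reported like an error
        Err(_) => {
            LAST_ERROR.with(|slot| *slot.borrow_mut() = Some((1, "panic".into())));
            std::mem::zeroed()
        }
    }
}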
We then just run cbindgen to spit out a C header. For the above functions the generated output looks like this:
/*
 * Represents a source view
 */
struct SymbolicSourceView;
typedef struct SymbolicSourceView SymbolicSourceView;

/*
 * Creates a source view from the given bytes.
 *
 * This shares the underlying memory and does not copy it if that is
 * possible. Will ignore utf-8 decoding errors.
 */
SymbolicSourceView *symbolic_sourceview_from_bytes(const char *bytes, size_t len);

/*
 * Frees a source view.
 */
void symbolic_sourceview_free(SymbolicSourceView *ssv);

/*
 * Returns the underlying source (borrowed).
 */
SymbolicStr symbolic_sourceview_as_str(const SymbolicSourceView *ssv);
Note that we did not specify #[repr(C)] for SymbolicSourceView in the Rust code. The type is unsized and we only ever refer to it via pointers. With a C representation, cbindgen would generate an empty struct, which has undefined behavior in the C standard.
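As an aside, cbindgen does not have to be invoked by hand: assuming the cbindgen crate is listed under [build-dependencies], a small build script can regenerate the header on every build. A sketch, with illustrative paths:
// build.rs of the C-ABI crate: regenerate the C header on each build
// instead of running the cbindgen CLI manually.
fn main() {
    let crate_dir = std::env::var("CARGO_MANIFEST_DIR").unwrap();
    cbindgen::Builder::new()
        .with_crate(crate_dir)
        .generate()
        .expect("unable to generate C bindings")
        .write_to_file("include/example.h");
}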
Low-level CFFI Usage
To get a feeling for how we use these functions, it's key to know that the basic C functions are exposed as-is in Python, so we build high-level wrappers around them. The most common such wrapper is our rustcall function, which does automatic error handling:
# this is where the generated cffi module lives
from symbolic._lowlevel import ffi, lib


class SymbolicError(Exception):
    pass


# Can register specific error subclasses for codes
exceptions_by_code = {}


def rustcall(func, *args):
    """Calls rust method and does some error handling."""
    lib.symbolic_err_clear()
    rv = func(*args)
    err = lib.symbolic_err_get_last_code()
    if not err:
        return rv
    msg = lib.symbolic_err_get_last_message()
    cls = exceptions_by_code.get(err, SymbolicError)
    raise cls(decode_str(msg))


def decode_str(s, free=False):
    """Decodes a SymbolicStr"""
    try:
        if s.len == 0:
            return u''
        return ffi.unpack(s.data, s.len).decode('utf-8', 'replace')
    finally:
        if free:
            lib.symbolic_str_free(ffi.addressof(s))
As you can see, we haven't shown all of the functions we use here yet, but you probably get the idea of what they look like. The entirety of the system can be found on GitHub.
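For intuition, here is roughly what those error helpers could look like on the Rust side, building on the LAST_ERROR threadlocal and the SymbolicStr type from the earlier sketch (again illustrative, not symbolic's actual code):
// Builds on LAST_ERROR, SymbolicStr, c_char and c_int from the sketch above.

/// Clears any stashed error; called before each FFI call.
#[no_mangle]
pub extern "C" fn symbolic_err_clear() {
    LAST_ERROR.with(|slot| *slot.borrow_mut() = None);
}

/// Returns the code of the last error, or 0 if the call succeeded.
#[no_mangle]
pub extern "C" fn symbolic_err_get_last_code() -> c_int {
    LAST_ERROR.with(|slot| slot.borrow().as_ref().map_or(0, |err| err.0))
}

/// Returns an owned copy of the last error message; the Python side
/// releases it again through symbolic_str_free.
#[no_mangle]
pub extern "C" fn symbolic_err_get_last_message() -> SymbolicStr {
    LAST_ERROR.with(|slot| match slot.borrow().as_ref() {
        Some((_, msg)) => {
            let mut copy = msg.clone();
            copy.shrink_to_fit();
            let rv = SymbolicStr {
                data: copy.as_ptr() as *mut c_char,
                len: copy.len(),
                owned: true,
            };
            // leak the copy; symbolic_str_free reconstructs and drops it
            std::mem::forget(copy);
            rv
        }
        None => SymbolicStr::new(""),
    })
}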
High-level CFFI Wrappers
Out of this low-level API we can build higher level wrappers. We try to anchor the lifetime of objects in the Python space and invoke the deallocation functions on the Rust/C side that way. For this we typically use a basic object type:
class RustObject(object):
    __dealloc_func__ = None
    _objptr = None

    def __init__(self):
        raise TypeError('Cannot instantiate %r objects' %
                        self.__class__.__name__)

    @classmethod
    def _from_objptr(cls, ptr):
        rv = object.__new__(cls)
        rv._objptr = ptr
        return rv

    def _get_objptr(self):
        if not self._objptr:
            raise RuntimeError('Object is closed')
        return self._objptr

    def _methodcall(self, func, *args):
        return rustcall(func, self._get_objptr(), *args)

    def __del__(self):
        if self._objptr is None:
            return
        f = self.__class__.__dealloc_func__
        if f is not None:
            rustcall(f, self._objptr)
            self._objptr = None
Below is an example use of this:
from symbolic._lowlevel import lib


class SourceView(RustObject):
    __dealloc_func__ = lib.symbolic_sourceview_free

    @classmethod
    def from_bytes(cls, data):
        data = bytes(data)
        rv = cls._from_objptr(rustcall(lib.symbolic_sourceview_from_bytes,
                                       data, len(data)))
        # we need to keep this reference alive or we crash. hard.
        rv.__data = data
        return rv

    def get_source(self):
        return decode_str(self._methodcall(lib.symbolic_sourceview_as_str))
Here an object is created through a class method. Because we track memory from the Python side, we need to ensure that the Python object holds a strong reference to the input data (in this case the bytes); otherwise we will crash in Rust once the Python bytes have been garbage collected.
Outlook
As you can see, there is still a lot of work that goes into making all of this function, which may lead you to ask: can this be automated? We hope to be able to do so at some point, but right now we don't want to go down this path, for a wide range of reasons. The main one is that we would likely be depending on highly unstable interfaces in the Rust compiler, and we'd prefer to wait for all of this to stabilize first.
That said, there are competing systems in the Rust/Python world, all of which require linking against libpython. Obviously, this makes data exchange between Python and Rust significantly easier, but for us the complexity it adds to the build process (mainly the number of wheels we would need to build, and the loss of PyPy support) is not worth it.