How Sentry’s Seer AI Agent passes legal review: a guide for legal teams reviewing Seer

(written by lawyers for lawyers but probably also comforting for those interested in using Seer but who don’t have lawyers)
If your legal department is anything like ours, you’re being inundated with requests from the business to use more and more AI tools. Whether it’s developers wanting to use coding agents like Cursor, security teams implementing AI-driven investigations, or sales and marketing leveraging AI for call insights and competitive research, we’ve seen a shift in what teams are trying and buying. Even where the primary functionality of a service is not AI-powered, almost every service today has AI-powered features.
The business knows at some level that there are risks associated with AI, but it feels immense pressure to use it or be left behind. You feel the same pressure but also have a mandate to protect your company’s data and intellectual property, as well as the data and intellectual property that you may be holding for your customers.
If you’re being asked to provide a legal review of Seer, the AI agent that Sentry has been building to power our AI and ML features (including Issue Scan and Issue Fix), the purpose of this post is to demonstrate that Seer passes the same criteria that we, as the Sentry legal team, use to evaluate third-party AI tools.
If we were asked to evaluate Seer, we would approve it. Here’s why.
So what does legal actually care about?
To protect our (and our customers’) data and intellectual property while continuing to support the evolving needs of the business, we’ve aligned on the following requirements for the use of AI tools at Sentry:
Limited Data Use Rights: This means contractual commitments from suppliers that data will only be used to provide the service to us (i.e., data won’t be shared with third parties or used to train models without our express consent).
In-Product Data Controls: Tools that use our data to train models should have scalable administrative (not just user-level) controls such as opt-ins to enable training or opt-outs to disable training.
Consistent with our Customer Commitments: Anything that will touch our customers’ data must align with our own commitments to customers, including data deletion/retention policies that match our own, data storage locations that support a choice of U.S. or EU data regions, and appropriate subprocessor Data Processing Agreements (DPAs) and Business Associate Agreements (BAAs).
Audited Security Controls: Suppliers must have audited security controls such as SOC 2 Type 2 and ISO 27001.
Transparency: Suppliers are clear about how they are interacting with our data.
AI checklist
We’ve even put together a checklist that we use for assessing these requirements, and a downloadable version is available for you to use in your own review.
How does Seer stack up against our own assessment criteria?
Since we often struggle to find information on our suppliers’ AI features, we’ve included links below with more details about the specific controls that we have enabled.
| Principle | Controls | Assessment of Supplier |
| --- | --- | --- |
| Limited Data Use Rights | [✔️] Supplier’s right to use customer data is limited to only providing the service to us<br>[✔️] Express prohibition on use of our data for model training or product improvement without our consent<br>[✔️] Express prohibition on sharing output based on our data with third parties without our consent | Right to use data is covered in the Terms of Service.<br>Per the Service Data Usage policy, Service Data is not used to train generative AI models, and output of generative AI features is not shared with third parties, without customer permission.<br>The Service Data Usage policy describes how and when Sentry can use Service Data for product improvement. |
| In-Product Data Controls | [✔️] Product includes admin controls to disable model training<br>[✔️] Product includes admin controls to disable AI features<br>[✔️] Product includes opt-in controls to use customer data for model training<br>[✔️] AI features are not on by default and require additional enablement/opt-in | The Service Data Usage policy describes how customers can update their settings to control use of Service Data.<br>Sentry offers an admin-level, org-wide opt-out to disable all generative AI features.<br>Seer features require users to take an affirmative action before Service Data will be processed by the features. |
| Consistent with Our Customer Commitments | [✔️] Supplier’s data deletion/retention policies align with ours<br>[✔️] Supplier’s data locality commitments align with ours<br>[✔️] Supplier has DPA<br>[✔️] Supplier has subprocessor BAA<br>[ ] Supplier indemnifies for third-party IP infringement claims based on output of generative AI features | Data is stored within Sentry’s infrastructure in the customer’s chosen data storage region (U.S. or EU).<br>Data is subject to our standard data retention policies.<br>Sentry is GDPR compliant.<br>Sentry has subprocessor DPAs and BAAs in place with the ML infrastructure providers that host the models used for Seer. |
| Audited Security Controls | [✔️] SOC 2 Type 2<br>[✔️] ISO 27001<br>[✔️] Annual penetration test<br>[ ] FedRAMP<br>[✔️] HIPAA Security Rule | Sentry is SOC 2 Type 2 and ISO 27001 certified.<br>Sentry conducts annual penetration tests and makes the reports available to customers via their accounts. |
| Transparency | [✔️] Supplier publicly documents how our data is used with AI features | Sentry builds in the open, so customers will always be able to validate that our code matches our stated commitments on our use of Service Data.<br>Sentry publishes documentation on Service Data Usage. |
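If you’re working through a queue of AI tool reviews, it can help to track each supplier against the checklist in a structured form. Below is a minimal sketch in Python; this is our own illustration rather than a Sentry tool, and every name in it is hypothetical. It encodes the table above and flags any unchecked controls for follow-up:

```python
from dataclasses import dataclass


@dataclass
class Principle:
    """One row of the AI-tool assessment checklist."""
    name: str
    controls: dict[str, bool]  # control description -> satisfied?

    def gaps(self) -> list[str]:
        """Return the controls the supplier has not satisfied."""
        return [control for control, ok in self.controls.items() if not ok]


# The checklist from this post, filled in per the assessment of Seer above.
seer_assessment = [
    Principle("Limited Data Use Rights", {
        "Data use limited to providing the service to us": True,
        "No model training or product improvement without consent": True,
        "No sharing of output with third parties without consent": True,
    }),
    Principle("In-Product Data Controls", {
        "Admin controls to disable model training": True,
        "Admin controls to disable AI features": True,
        "Opt-in controls for use of customer data in training": True,
        "AI features off by default and require opt-in": True,
    }),
    Principle("Consistent with Our Customer Commitments", {
        "Deletion/retention policies align with ours": True,
        "Data locality commitments align with ours": True,
        "Supplier has DPA": True,
        "Supplier has subprocessor BAA": True,
        "Indemnity for IP claims based on generative output": False,
    }),
    Principle("Audited Security Controls", {
        "SOC 2 Type 2": True,
        "ISO 27001": True,
        "Annual penetration test": True,
        "FedRAMP": False,
        "HIPAA Security Rule": True,
    }),
    Principle("Transparency", {
        "Publicly documents how our data is used with AI features": True,
    }),
]

# Print a one-line status per principle, flagging open items for follow-up.
for principle in seer_assessment:
    gaps = principle.gaps()
    status = "all controls met" if not gaps else "follow up on: " + "; ".join(gaps)
    print(f"{principle.name}: {status}")
```

Run as-is, it flags the two unchecked items from the table (the IP indemnity and FedRAMP); the discussion below explains why we would approve Seer despite the missing indemnity.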
You may have noticed that we would approve Seer even without an IP infringement indemnity for AI-generated code (which we don’t offer).
This is because we believe the risk of IP infringement from code generated by Seer is low, as described below for the four main categories of IP: patent, copyright, trade secret, and trademark.
Patent: Seer only provides fixes to code you have already written. It is very unlikely that the fixes Seer generates would depart so far from the underlying code you submit that they introduce a significant new risk of infringement. Your code is likely to be either infringing or non-infringing before you send it to Seer, and that status is unlikely to change based solely on any code Seer generates.
Copyright: The code that Seer generates consists of fixes that are functional in nature. Since copyright protects expression, not function, we believe the risk of copyrightable code being generated by Seer is low. This is not like other code-generation tools, where you ask AI to write code based on your ideas; with Seer, we are only fixing code that you already wrote.
Trade Secret: Any code that Seer generates comes from a third-party model, and any code that the model was trained on and that is surfaced by Seer is, by definition, no longer afforded trade secret protection because it is no longer secret.
Trademark: Not applicable, because software code generally carries no brand implications.
Conclusion
Every legal department obviously needs to make its own assessment based on the specific circumstances of its company.
That said, we believe our checklist is a sound and reasonable one that most of our peers would agree with.
We hope you find it useful in processing your queue of AI tool requests, and that Sentry’s Seer makes it to the top of your approved list.