Investigate a Sentry Error Spike

For an engineer who just noticed Sentry spiking on a critical endpoint. One prompt pulls the stack trace, finds the recent deploy that introduced the error, identifies the file:line, and pings the right team channel.

Sentry
GitHub
Slack
THE GRIND

Tracing a spike takes too long

Sentry tells you what's broken. GitHub tells you what changed. Connecting them under pressure takes 10 minutes of tab-switching — and meanwhile the errors pile up.

Jumping between Sentry stack traces and GitHub commit history

Missing the recent deploy in the timeline noise

Team doesn't get a heads-up until someone else reports the error

Composio collapses all of this into one prompt — here's what that looks like.

THE FLOW
5 steps · 3 toolkits

Your agent runs it end-to-end.

  1. Fetch the Sentry issue — stack trace, frequency, users affected (Sentry)
  2. List recent deploys in the affected service (GitHub)
  3. Correlate the spike start with deploy timestamps
  4. Identify the file:line from the stack that changed in that deploy
  5. Post a heads-up in the team channel with the rollback command (Slack)
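Steps 03–04 boil down to timestamp logic: the suspect is the most recent deploy that landed before the spike began. A minimal sketch, assuming deploys arrive as `(sha, deployed_at)` pairs pulled from the GitHub API (all data below is hypothetical):

```python
from datetime import datetime, timezone

def suspect_deploy(spike_start, deploys):
    """Return the most recent (sha, deployed_at) at or before the spike start.

    deploys: list of (sha, deployed_at) tuples, in any order.
    Returns None if every deploy landed after the spike began.
    """
    before = [d for d in deploys if d[1] <= spike_start]
    return max(before, key=lambda d: d[1]) if before else None

# Hypothetical deploy history for illustration
deploys = [
    ("a1b2c3d", datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc)),
    ("d4e5f6a", datetime(2024, 5, 1, 11, 30, tzinfo=timezone.utc)),
    ("b7c8d9e", datetime(2024, 5, 1, 14, 0, tzinfo=timezone.utc)),
]
spike = datetime(2024, 5, 1, 11, 45, tzinfo=timezone.utc)
print(suspect_deploy(spike, deploys))  # the 11:30 deploy is the suspect
```

From there, step 04 is a diff of the suspect commit filtered to the files named in the stack trace.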
THE PAYOFF

Root cause traced in 30 seconds

The on-call engineer sees the deploy, the file, and the rollback option immediately. Rollbacks happen in minutes — not after a Slack thread full of speculation.
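What that heads-up message might contain, as a hedged sketch — every URL, SHA, and field name below is a hypothetical placeholder, and the actual post would go out via Slack's `chat.postMessage` API:

```python
def heads_up(issue_url: str, sha: str, file_line: str, event_count: int) -> str:
    """Compose the Slack heads-up text (hypothetical fields for illustration)."""
    return (
        f":rotating_light: Error spike: {issue_url}\n"
        f"Likely cause: deploy `{sha}` — changed `{file_line}`\n"
        f"{event_count} events since the spike started\n"
        f"Rollback: `git revert {sha}` and redeploy"
    )

print(heads_up(
    "https://sentry.example.com/issues/12345",  # placeholder issue URL
    "d4e5f6a",                                  # placeholder commit SHA
    "api/checkout.py:88",                       # placeholder file:line
    412,
))
```

The point is that the message carries the rollback command itself, so the reader can act without reconstructing the investigation.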

Paste this into Claude, Cursor, or Codex. It'll install the CLI, connect your apps, and run the task — end to end.