Exception Notifications in Slack #183

Closed · wants to merge 2 commits

Conversation

@saneshark (Contributor)

Currently, the dev and staging environments suppress the backtrace from raised exceptions because that is the preference for production. For debugging, however, having this backtrace is incredibly helpful. In the absence of a mechanism for querying logs in real time, this is a good compromise for now.

This PR can be merged irrespective of https://github.com/department-of-veterans-affairs/devops/pull/685, but exceptions will not start being tracked in Slack until those ENV variables are present.
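For illustration, a minimal sketch of how the Slack notifier could be wired up with the exception_notification gem; the ENV variable names and channel below are placeholders, not necessarily what the devops PR defines:

```ruby
# config/initializers/exception_notification.rb
# Sketch only: the ENV variable names and channel are placeholders until the
# devops PR lands; the middleware stays inert while they are unset.
if !Rails.env.production? && ENV['SLACK_EXCEPTION_WEBHOOK_URL'].present?
  Rails.application.config.middleware.use ExceptionNotification::Rack,
    slack: {
      # Incoming-webhook URL and channel are read from the environment so
      # dev/staging only post to Slack once the variables are provisioned.
      webhook_url: ENV['SLACK_EXCEPTION_WEBHOOK_URL'],
      channel: ENV.fetch('SLACK_EXCEPTION_CHANNEL', '#api-exceptions')
    }
end
```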

@earthboundkid (Contributor)

In past jobs, we have used Sentry for this. Slack seems like a less useful choice because Sentry is designed for the job: it can control how many emails get sent, group related stack traces together, and so on. Which error-reporting services have we looked at? I believe CloudWatch has been mentioned.

@markolson (Contributor)

I'd really rather use a dedicated tool for exception handling, and then that tool can use whatever Slack integration it has to inform us of new exceptions.

@saneshark (Contributor, Author)

Sentry is on its way but could be 1–2 weeks out. This is an interim solution: QA is right around the corner, these 500 errors already occur in many instances, and no one has time to speculate about what raised them.

@saneshark (Contributor, Author)

@markolson what would be an example of a dedicated tool for exception handling? NewRelic?

@saneshark (Contributor, Author)

See the thread on issue #103 for an example.

@ayaleloehr (Contributor)

Sorry @saneshark, but I'm going to close this. My gut says that automatically posting log messages from GovCloud to Slack has too high a probability of getting us in trouble to be worth using as a stopgap until Sentry is available. I have access to the AWS logs, and if you can give me timestamps for any exceptions you're looking for, I can pull the logs and pass them along.

@jkassemi is prioritizing Sentry as fast as the VA will let him go (AKA he is waiting on DNS).

@ayaleloehr closed this Sep 30, 2016
earthboundkid added a commit that referenced this pull request Nov 3, 2016
@knkski deleted the exception_notifications branch on November 21, 2016