Setup time: 5 Min

Integrate All Quiet with Prometheus Alertmanager seamlessly. Automatically generate a unique webhook URL and send alerts from your observability platform directly to All Quiet, streamlining incident management.

1. Create Prometheus Integration on All Quiet

Log in to your All Quiet account.

Create Integration

  1. Click on the Integrations > Inbound Tab.
  2. Click on Create New Integration.

Select Prometheus Alertmanager as the integration’s type

  1. Enter a display name for your integration, e.g. “Prometheus Alertmanager”.
  2. Select a team.
  3. Select “Prometheus Alertmanager” as the type.
  4. Click Create integration.

Copy Webhook URL

After successfully creating your Prometheus Alertmanager integration, make sure to copy the webhook URL.

2. Configure Prometheus Alertmanager

Once you’ve set up an integration of type “Prometheus Alertmanager” with All Quiet, the next crucial steps involve configuring your Prometheus and Alertmanager instances. This is essential for ensuring that your monitoring setup can effectively send incidents to the All Quiet webhook. In this part of the guide, we will walk you through simple yet effective configuration examples for both Prometheus and Alertmanager.

Setting Up Prometheus

First, let’s start with the Prometheus configuration. Your prometheus.yml should primarily include the scrape_configs needed to monitor your targets, the rule_files to load, and the alerting details. Below is an example configuration:

```yaml
scrape_configs:
  - job_name: ''
    scrape_interval: 5s
    scheme: https
    metrics_path: /status
    static_configs:
      - targets: ['']

rule_files:
  - "*.rules"

alerting:
  alertmanagers:
    - scheme: http
      static_configs:
        - targets: [ '' ]
```

In this configuration, scrape_configs defines the job for scraping metrics, with a frequent interval of every 5 seconds. We’re observing our own platform in this example :). The https scheme and /status metrics path dictate how Prometheus accesses the data.

The rule_files section tells Prometheus to load any alerting rules from files ending with .rules.

The alerting section is crucial for the integration. It specifies that Prometheus should send alerts to an Alertmanager instance at the address listed under the alertmanagers targets.

With these settings, Prometheus is configured to monitor its targets closely and forward alerts to Alertmanager, which then communicates with the All Quiet platform, ensuring efficient incident management.
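For context, what Prometheus collects from the metrics_path is the plain-text exposition format. The following Python sketch (not part of the integration itself; the sample metric lines are illustrative) shows how a scraped metric such as scrape_duration_seconds appears in that format:

```python
# Minimal sketch: parsing Prometheus text exposition format.
# Illustrates the kind of data Prometheus scrapes from metrics_path;
# the sample metrics below are hypothetical.

def parse_metrics(text: str) -> dict:
    """Parse 'name{labels} value' lines into a {name: value} dict."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank, HELP, and TYPE lines
            continue
        name_part, _, value = line.rpartition(" ")
        # Drop any {label="..."} suffix to get the bare metric name
        name = name_part.split("{", 1)[0]
        metrics[name] = float(value)
    return metrics

sample = """
# HELP scrape_duration_seconds Duration of the scrape
scrape_duration_seconds{job="example"} 0.042
up 1
"""
print(parse_metrics(sample))  # {'scrape_duration_seconds': 0.042, 'up': 1.0}
```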

Setting Up Alert Rules

After configuring the prometheus.yml file, the next step in integrating Prometheus with All Quiet is to set up alert rules. Alert rules in Prometheus define the conditions under which an alert should be fired. Below is a sample alert rule file that demonstrates how to create a rule for monitoring response times.

Here’s the alert rule configuration:

```yaml
groups:
  - name: ''
    rules:
      - alert: Response Time slow
        expr: scrape_duration_seconds{job=""} > 0.1
        for: 5s
        labels:
          severity: critical
        annotations:
          description: "Response time is bad"
```

This rule is set up under a named group. The rule Response Time slow triggers an alert if the scrape_duration_seconds for the job exceeds 0.1 seconds, sustained over a period of 5 seconds. This means if the response time of the monitored service goes beyond 100 milliseconds and stays that way for at least 5 seconds, an alert is triggered.

The labels section classifies the alert’s severity as critical, which can be useful for routing and handling the alert. The annotations section provides a descriptive message for the alert, e.g. indicating that the response time of the service is poor. :)

By implementing this alert rule, you can effectively monitor critical performance metrics like response times and ensure that such issues are promptly flagged and communicated to the All Quiet platform for efficient incident management.
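The for: 5s clause means the expression must stay true for the whole duration before the alert transitions from pending to firing. Here is a rough Python sketch of that behavior (the timestamps and threshold are illustrative, not part of Prometheus itself):

```python
# Rough sketch of Prometheus's `for:` semantics: an alert only fires
# once its condition has held continuously for the configured duration.
# Sample timestamps and values below are illustrative.

def alert_fires(samples, threshold=0.1, for_seconds=5):
    """samples: list of (timestamp, value) pairs. Returns True if
    value > threshold has held continuously for at least for_seconds."""
    pending_since = None
    for ts, value in samples:
        if value > threshold:
            if pending_since is None:
                pending_since = ts          # condition became true: pending
            if ts - pending_since >= for_seconds:
                return True                 # held long enough: firing
        else:
            pending_since = None            # condition cleared: reset
    return False

# Condition true for only 4s, then clears -> still pending, no alert:
print(alert_fires([(0, 0.2), (2, 0.2), (4, 0.2), (5, 0.05)]))  # False
# Condition true for 6s straight -> alert fires:
print(alert_fires([(0, 0.2), (3, 0.2), (6, 0.2)]))  # True
```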

Setting Up Alertmanager

The final step in integrating Prometheus Alertmanager with All Quiet is to configure the Alertmanager itself. This configuration ensures that Alertmanager appropriately routes, groups, and sends alerts to the All Quiet platform. Here’s how to set up the Alertmanager using the provided YAML configuration:

```yaml
route:
  group_wait: 5s
  group_interval: 5s
  repeat_interval: 20s
  receiver: 'allquiet'

receivers:
  - name: 'allquiet'
    webhook_configs:
      - url: ''
```

In this configuration:
  • The route section defines how alerts are processed and sent to receivers. group_wait sets the time to wait before sending a notification about new alerts that are added to a group of alerts. group_interval sets the interval between sending notifications about the same group of alerts, while repeat_interval controls how long to wait before sending repeat notifications.
  • The receiver parameter within the route is set to 'allquiet'. This tells Alertmanager to use the allquiet receiver for notifications.
  • In the receivers section, a receiver named allquiet is defined. This receiver uses webhook_configs to send alerts to the specified URL, which is the webhook provided by All Quiet in Copy Webhook URL.

By applying this configuration, you ensure that Alertmanager routes alerts to All Quiet efficiently. The alerts are grouped and sent based on the defined intervals, and the webhook URL ensures that these alerts are received by All Quiet for effective incident management. This setup completes the integration process, enabling your monitoring system to communicate seamlessly with All Quiet.
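For reference, the body Alertmanager POSTs to the webhook URL follows its standard JSON payload format (version "4"). Below is a minimal Python sketch of what All Quiet receives for the example rule above; the field values are illustrative, and the mapping into an incident is handled on the All Quiet side:

```python
import json

# Sketch of the standard Alertmanager webhook payload (version "4")
# that gets POSTed to the All Quiet webhook URL. Values are illustrative.
payload = {
    "version": "4",
    "status": "firing",
    "receiver": "allquiet",
    "alerts": [
        {
            "status": "firing",
            "labels": {"alertname": "Response Time slow", "severity": "critical"},
            "annotations": {"description": "Response time is bad"},
            "startsAt": "2024-01-01T00:00:00Z",
        }
    ],
}

body = json.dumps(payload)
# An HTTP POST of `body` with Content-Type: application/json to the
# webhook URL from "Copy Webhook URL" is what creates the incident.
```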

3. Test Your Integration

You’re almost done. 🥳 The next steps are merely there to verify that everything’s set up correctly!

Navigate back to All Quiet and the integration you created in Create Prometheus Integration on All Quiet.

  1. Click Reload to load your most recent payloads.
  2. Click ← Select to load the test payload from the previous step.
  3. Observe how the mapping transforms the Prometheus Alertmanager payload into an All Quiet incident.

Prometheus Alertmanager is now successfully integrated with All Quiet, following the detailed setup and verification steps provided.