diff --git a/docs/agent/web-inspection-interface.md b/docs/agent/web-inspection-interface.mdx
similarity index 81%
rename from docs/agent/web-inspection-interface.md
rename to docs/agent/web-inspection-interface.mdx
index 10c6c8d43..5c8ffda91 100644
--- a/docs/agent/web-inspection-interface.md
+++ b/docs/agent/web-inspection-interface.mdx
@@ -2,6 +2,8 @@

 The ngrok agent ships with a realtime inspection interface which allows you to see what traffic is sent to your upstream service and what responses it is returning.

+The Web Inspection Interface is only available in the ngrok standalone agent and not in the ngrok Agent SDKs. If you are interested in viewing traffic across all endpoints, longer retention periods, or sharing traffic events with other teammates, check out the [Traffic Inspector in the ngrok Dashboard](/docs/obs/traffic-inspection/#ngrok-traffic-inspector) instead.
+
 ## Inspecting requests

 Every HTTP request through your tunnels will be displayed in the inspection interface. After you start the ngrok agent, open [http://localhost:4040](http://localhost:4040) in a browser on the same machine. You will see all of the details of every request and response including the time, duration, source IP, headers, query parameters, request payload and response body as well as the raw bytes on the wire.

@@ -40,6 +42,8 @@ You may specify multiple filters. If you do, requests will only be shown if they

 Developing for webhooks issued by external APIs can often slow down your development cycle by requiring you do some work, like dialing a phone, to trigger the hook request. ngrok allows you to replay any request with a single click, dramatically speeding up your iteration cycle. Click the **Replay** button at the top-right corner of any request on the web inspection UI to replay it.

+Replay works by having the local agent send the request directly to your upstream service. As such, the replayed request will not be subject to any policies that exist on your cloud endpoint, since those are applied before the request reaches the local agent. If you want to replay the original request from before your endpoint policies were applied, or to test new policies against it, use the [Traffic Inspector in the ngrok Dashboard](/docs/obs/traffic-inspection/#ngrok-traffic-inspector).
+
 ###### Replay any request against your tunneled web server with one click

 ![](/img/docs/replay2.png)

diff --git a/docs/integrations/azure-logs-ingestion/event-destination.mdx b/docs/integrations/azure-logs-ingestion/event-destination.mdx
index 64b09ea83..949f13eaa 100644
--- a/docs/integrations/azure-logs-ingestion/event-destination.mdx
+++ b/docs/integrations/azure-logs-ingestion/event-destination.mdx
@@ -1,5 +1,5 @@
 ---
-title: Integrate with Azure Logs Ingestion using the ngrok API
+title: Integrate with the Azure Logs Ingestion API
 description: Send network traffic logs from ngrok to Azure Logs Ingestion
 tags:
   - events
@@ -12,7 +12,7 @@ tags:

 :::tip TL;DR

-To send ngrok events to Azure Logs Ingestion:
+To send ngrok events to Azure using the Azure Logs Ingestion API:

 1. [Create a Log Analytics Workspace](#log-analytics-workspace)
 1. [Create a Data Collection Endpoint](#data-collection-endpoint)
@@ -24,10 +24,10 @@ To send ngrok events to Azure Logs Ingestion:

 :::

-This guide covers how to send ngrok events including network traffic logs into Azure Logs Integstion. 
-You may want to keep an audit log of configuration changes within your ngrok
-account, record all traffic to your endpoints for active monitoring/troubleshooting, or
-you may use Azure Logs Ingestion as a SIEM and want to use it for security inspections.
+This guide covers how to send ngrok events, including network traffic logs, into Azure via the Logs Ingestion API.
+
+This is useful if you want to keep an audit log of configuration changes within your ngrok
+account, record all traffic to your endpoints for active monitoring/troubleshooting, or leverage it as a SIEM for security inspections.

 By integrating ngrok with Azure, you can:

@@ -38,44 +38,50 @@ By integrating ngrok with Azure, you can:

 ## **Step 1**: Create a Log Analytics Workspace {#log-analytics-workspace}

-1. Using a browser, log into your Azure portal.
+These steps were adapted from the [Create a Log Analytics workspace](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/quick-create-workspace) docs from Microsoft.
+
+1. Using a browser, log into your [Azure portal](https://portal.azure.com).

 2. Navigate to the search bar and type in **Log Analytics Workspaces**

-3. Click on the service entry (not the marketplace entry).
+3. Click on the **Services** entry (not the Marketplace entry).

 ![search log analytics workspaces](img/search-workspaces.png)

 4. Click **Create** on the top bar Log Analytics Workspace page.

-5. Follow the wizard to create your Log Analytics Workspace, filling in the necessary region information, name, and resource group, before clicking **Review + Create**.
+5. Follow the wizard to create your Log Analytics Workspace, filling in the necessary region information, name, and resource group, before clicking **Review + Create**. These values can be anything you like and do not impact ngrok's ability to send logs to your Azure account.

 6. Click **Create** at the bottom of the review step to finally provision the Log Analytics Workspace.

 ![create log analytics workspaces](img/create-workspace-review.png)

-You now have a Log Analytics Workspace, which will be the home for your data collection endpoint, tables, and rules.
+You now have a **Log Analytics Workspace**, which will be the home for your data collection endpoint, tables, and rules.

 ## **Step 2**: Create a Data Collection Endpoint {#data-collection-endpoint}

+These steps were adapted from the [Create a data collection endpoint](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-collection-endpoint-overview#create-a-data-collection-endpoint) docs from Microsoft.
+
 1. Navigate to the search bar and type in **Data Collection Endpoints**

-2. Click on the service entry.
+2. Click on the **Services** entry.

 ![search data collection endpoints](img/search-dce.png)

 3. Click **Create** on the top bar Data Collection Endpoints page.

-4. Follow the wizard to create your Data Collection Endpoint, filling in the necessary region information, name, and resource group, before clicking **Review + Create**.
+4. Follow the wizard to create your Data Collection Endpoint, filling in the necessary region information, name, and resource group, before clicking **Review + Create**. These fields can be anything you like and do not impact ngrok's ability to send logs to your Azure account.

-5. Click **Create** at the bottom of the review step to finally provision the Data Collection Endpoint.
+5. Click **Create** at the bottom of the review step to provision the Data Collection Endpoint. 
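+
+:::tip Prefer the command line?
+
+You can also provision an equivalent Data Collection Endpoint with the Azure CLI instead of clicking through the portal. This is only a sketch, not part of the portal flow above: it assumes the `monitor-control-service` CLI extension is installed, and the name, resource group, and location below are placeholders you should replace with your own values.
+
+```bash
+# One-time setup: the data-collection commands live in a CLI extension.
+az extension add --name monitor-control-service
+
+# Create the Data Collection Endpoint (all names here are placeholders).
+az monitor data-collection endpoint create \
+  --name ngrok-dce \
+  --resource-group my-resource-group \
+  --location eastus \
+  --public-network-access "Enabled"
+```
+
+:::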
 ![create dce](img/create-dce.png)

-You now have a Data Collection Endpoint, which is the network accessible service that ngrok connects via to send Azure events.
+You now have a **Data Collection Endpoint**, which is the network-accessible service that ngrok connects to when sending events into Azure.

 ## **Step 3**: Create a DCR-based Custom Table in the Workspace {#data-collection-rule}

+These steps were adapted from the [Create a new table in Log Analytics workspace](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/tutorial-logs-ingestion-portal?source=recommendations#create-new-table-in-log-analytics-workspace) docs from Microsoft.
+
 1. Navigate to the **Log Analytics Workspaces** list once again.

 2. Click the workspace you created previously in [**Step 1**](#log-analytics-workspace).

@@ -86,7 +92,7 @@ You now have a Data Collection Endpoint, which is the network accessible service

 ![create dcr-based table](img/create-dcr-table.png)

-5. Populate the table name with a name of your choice and the DCE field with the existing DCE you created in [**Step 2**](#data-collection-endpoint)
+5. Populate the table name with a name of your choice and the DCE field with the existing DCE you created in [**Step 2**](#data-collection-endpoint).

 6. Click **Create a new data collection rule** underneath the Data collection rule field, which opens a drawer. Fill out the resource group and name, before clicking **Done** on the drawer.

@@ -94,7 +100,7 @@ You now have a Data Collection Endpoint, which is the network accessible service

 7. Click **Next** in the table creation wizard.

-8. Upload a sample json file with the following contents to the wizard.
+8. Upload a JSON file with the following contents using the wizard. After uploading, you will notice the warning "TimeGenerated field is not found in the sample provided," which is expected.

 ```json
 {
@@ -106,16 +112,14 @@ You now have a Data Collection Endpoint, which is the network accessible service
 ```

 :::tip Not to worry!

-You will notice a warning header "TimeGenerated field is not found in the sample provided"; this is expected.

 We will remedy this by using the **Transformation Editor**.
-
 :::

 9. Click the **Transformation editor** button on the top bar of the wizard, which will open a drawer.

-10. Paste in the following and click **Run**.
+10. Paste in the following transformation and click **Run**.

 ```

 source
@@ -132,13 +136,15 @@ source

 ![create table success](img/create-table-success.png)

-You now have a Data Collection Rule properly configured for ngrok events, alongside a table where the data will be stored.
+You now have a **Data Collection Rule** properly configured for ngrok events, alongside a table where the data will be stored.

 ## **Step 4**: Create a Microsoft Entra Application {#entra-application}

+These steps were adapted from the [Create a Microsoft Entra Application](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/tutorial-logs-ingestion-portal?source=recommendations#create-microsoft-entra-application) docs from Microsoft.
+
 1. Navigate to the search bar and type in **Entra ID**.

-2. Select the **Microsoft Entra ID** service, not the marketplace item.
+2. Select the **Microsoft Entra ID** entry under **Services**, not the Marketplace item.

 ![search entra id](img/search-entra.png)

@@ -146,21 +152,23 @@ You now have a Data Collection Rule properly configured for ngrok events, alongs

 4. Click **New registration**

-5. 
Name the application **ngrok-events** or something similar to clarify it's use; this entity will be what ngrok uses to authenticate with your data collection endpoint.
+5. Name the application **ngrok-events** or something similar to clarify its use; this entity will be what ngrok uses to authenticate with your data collection endpoint.

-6. Select **Accounts in this organizational directory only (ngrok only - Single tenant)** for the account type
+6. Select the first radio option, **Accounts in this organizational directory only**, for the account type.

 7. Click **Register**

 ![register app](img/register.png)

-You have now created an Entra ID App Registration, which is a service user construct that grants roles/access to services like ngrok.
+You have now created an **Entra ID App Registration**, which is a service user construct that grants roles/access to services like ngrok.

 ## **Step 5**: Assign IAM permissions to the Application for the DCR {#dcr-iam}

+These steps were adapted from the [Assign permissions to the DCR](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/tutorial-logs-ingestion-portal?source=recommendations#assign-permissions-to-the-dcr) docs from Microsoft.
+
 1. Navigate to the search bar and type in **Data collection rules**.

-2. Select the **Data collection rules** service.
+2. Select the **Data collection rules** option under **Services**.

 3. Click on the Data collection rule created in [**Step 3**](#data-collection-rule).

@@ -194,7 +202,7 @@ You have now granted access for the ngrok application to ingest logs into the DC

 ## **Step 6**: Gather necessary data for Event Destination {#event-destination-data}

-In order to create an event destination, we need:
+In order to create an event destination in ngrok, we will need to gather the following information from what we just created in Azure:

 - the Tenant ID
 - the Application's Client ID
@@ -217,7 +225,7 @@ In order to create an event destination, we need:

 6. Fill in the description and expiry date with the desired values, before clicking **Add**.

-7. Copy the value below - **this value will no longer be available once you navigate away**.
+7. Copy the secret value provided by Azure - **this value will no longer be available once you navigate away**.

 ![app secret](img/app-secret.png)

@@ -231,19 +239,23 @@ In order to create an event destination, we need:

 11. Navigate to **Configuration -> Data sources** in the sidebar.

-12. Copy the **Data source** name, which should start with **Custom\_** and end with **\_CL**. This is the DCR stream name.
+12. Copy the **Data source** name, which should start with **Custom\_** and end with **\_CL**. This is the **DCR stream name**.

 ![stream name](img/stream-name.png)

-13. Finally, navigate to **Data collection endpoints** in the top searchbar.
+13. Finally, navigate to **Data collection endpoints** in the top search bar.

 14. Select the Data collection endpoint you created in [**Step 2**](#data-collection-endpoint).

-15. In the **Overview** tab, copy the logs ingestion URI.
+15. In the **Overview** tab, copy the **Logs Ingestion URI**.

 You now have all the required data to create an event destination with ngrok!

-## **Step 7**: Create a Log Analytics Workspace {#log-analytics-workspace}
+## **Step 7**: Create the new Event Destination in ngrok {#create-event-destination}
+
+At this point, you can choose to create the event destination via [the ngrok API](#create-via-ngrok-api) or [through the ngrok Dashboard](#create-via-ngrok-dashboard). 
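+
+Either path expects the values you gathered in [**Step 6**](#event-destination-data). If you misplace them, most can be read back with the Azure CLI. The sketch below makes some assumptions: the `monitor-control-service` extension is installed, and the application, rule, and endpoint names are placeholders matching what you created earlier. The client secret is the one value that cannot be recovered this way.
+
+```bash
+# Tenant ID of the signed-in Azure account.
+az account show --query tenantId --output tsv
+
+# Client ID of the app registration from Step 4 (display name is a placeholder).
+az ad app list --display-name ngrok-events --query "[0].appId" --output tsv
+
+# Immutable ID of the Data Collection Rule from Step 3.
+az monitor data-collection rule show \
+  --resource-group my-resource-group \
+  --name ngrok-dcr \
+  --query immutableId --output tsv
+
+# Logs Ingestion URI of the Data Collection Endpoint from Step 2.
+az monitor data-collection endpoint show \
+  --resource-group my-resource-group \
+  --name ngrok-dce \
+  --query logsIngestion.endpoint --output tsv
+
+# The client secret is only shown at creation time and cannot be read back;
+# if it is lost, add a new secret under the app registration instead.
+```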
+
+### Creating via the ngrok API {#create-via-ngrok-api}

 1. Create an API key with ngrok. You can do this via the [ngrok dashboard](https://dashboard.ngrok.com/api).

@@ -318,3 +330,7 @@ https://api.ngrok.com/event_subscriptions
 ```

-After getting a 200 response, your event destination is successfully configured and subscribed to the set of events types you desire.
+After getting a 200 response, your event destination is successfully configured and subscribed to the set of event types you desire.
+
+### Creating via the ngrok Dashboard {#create-via-ngrok-dashboard}
+
+Coming soon!
diff --git a/docs/integrations/azure-logs-ingestion/index.mdx b/docs/integrations/azure-logs-ingestion/index.mdx
index beb29e897..8a1674a33 100644
--- a/docs/integrations/azure-logs-ingestion/index.mdx
+++ b/docs/integrations/azure-logs-ingestion/index.mdx
@@ -3,10 +3,9 @@
 name: azure-logs-ingestion
 title: Azure Logs Ingestion Integration Hub
 sidebar_label: Azure Logs Ingestion
 description: |
-  Using Azure Logs Ingestion event destination for ngrok event observability.
-  All with security and access from ngrok.
+  Send ngrok audit & traffic logs into Azure using the Azure Logs Ingestion Event Destination.
 excerpt: |
-  Sending ngrok events into Azure Logs Ingestion.
+  Send ngrok events into Azure Logs Ingestion.
 ---

 import IntegrationPageList from "@site/src/components/IntegrationPageList";

diff --git a/docs/obs/index.mdx b/docs/obs/index.mdx
index deacd735c..f7139700e 100644
--- a/docs/obs/index.mdx
+++ b/docs/obs/index.mdx
@@ -8,13 +8,13 @@ pagination_next: obs/reference

 ## Overview

 Whenever changes occur in your ngrok account or when traffic transits through
-your endpoints, an event is fired. You may subscribe to these events, filter
+your endpoints, an event is fired. You may subscribe to these events and, in some cases, filter
 them to those relevant to you and publish them to any number of destinations.

 ngrok's event system was designed for three primary use cases:

-- Sending logs of your ngrok traffic to logging services like Datadog and
-  CloudWatch Logs
+- Sending logs of your ngrok traffic to external services such as Datadog and
+  Amazon CloudWatch Logs
 - Sending audit logs of ngrok configuration changes to your SIEM
 - Enabling you to programmatically respond to events on your ngrok account

@@ -53,14 +53,15 @@ and publishing events.

 We also publish guides to get started with each of ngrok's Event Destinations:

-- **[Datadog Logs](/docs/integrations/datadog/event-destination/)**
 - **[AWS CloudWatch Logs](/docs/integrations/amazon-cloudwatch/event-destination/)**
 - **[AWS Firehose](/docs/integrations/amazon-firehose/event-destination/)**
 - **[AWS Kinesis](/docs/integrations/amazon-kinesis/event-destination/)**
+- **[Azure Logs Ingestion](/docs/integrations/azure-logs-ingestion/event-destination/)**
+- **[Datadog Logs](/docs/integrations/datadog/event-destination/)**

 ## Event Subscriptions {#subscriptions}

-Event subscriptions define which Event Sources to capture and which
+Event subscriptions define which [Event Sources](/obs/reference/) to capture and which
 destinations to publish to. If you're familiar with other event systems, they
 may call this a _listener_, a _hook_, a _probe_ or a _tap_.

@@ -112,8 +113,8 @@ for further detail.

 #### Filters {#filters}

-You may specify a filter on the Event Sources of [Traffic
-Events](/docs/obs/reference/#traffic-events). Filters are a boolean
+You may specify a filter on [Traffic
+Events](/docs/obs/reference/#traffic-events) since the velocity of these events can be quite high. 
Filters are a boolean
 expression defined in [Google's Common Expression Language
 (CEL)](https://github.com/google/cel-spec/blob/master/doc/langdef.md#standard).
 Filters are evaluated on each event as it is published to determine whether it

@@ -161,7 +162,7 @@ ev.conn.server_name == "ngrok-docs-examples.ngrok.dev"

 ## Event Destinations {#destinations}

 An Event Destination encapsulates the configuration to publish events to other
-systems. Other event systems may call this a _sink_.
+systems. Other event systems may call this a _sink_ or _output_.

 Event Destinations are typically third-party logging aggregators. The following
 destinations are currently supported:

@@ -170,16 +171,19 @@
 - [Amazon Firehose](/docs/integrations/amazon-firehose/event-destination/)
 - [Amazon Kinesis](/docs/integrations/amazon-kinesis/event-destination/)
 - AWS S3 (via Kinesis Firehose)
+- [Azure Logs Ingestion](/docs/integrations/azure-logs-ingestion/event-destination/)
 - [Datadog Logs](/docs/integrations/datadog/event-destination/)

 Each destination requires provider-specific configuration. If you create a
 destination in the ngrok dashboard, you'll be prompted to send a test event to
 verify the integration.

+::::note
 When configuring AWS destinations you'll be prompted to optionally download a
 small helper script which will automatically configure the appropriate IAM
 objects necessary for integration. You may also set these values up via the AWS
 Console or tools like Terraform or Pulumi.
+::::

 Amazon S3 is not a directly supported destination. Instead, configure Amazon
 Firehose to [deliver events into an S3
 bucket](https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html#

@@ -189,7 +193,7 @@

 Events are serialized as JSON when they are published to a destination.

-Events include the following fields:
+All events include the following fields:

 | Name | Description | Example |
 | ----------------- | --------------------------------------------------------------------------- | -------------------------------- |
 | `object` | the event object | See examples below |
 | `principal` | an object of the principal who actioned this event, null for traffic events | See example below |

-The `object` property of the event is distinct for each Event Source.
+The `object` property of the event is distinct for each Event Source and contains a JSON object with additional information about the event.

 For [Audit Events](/obs/reference#audit-events), the `object` representation is
 identical to its API resource at the time of capture.

@@ -212,7 +216,7 @@ representation because they have no corresponding API resource definition.

-The `principal` object in every event describe the user or bot user
+The `principal` object in every event describes the user or bot user
 responsible for initiating the event. Principal is defined for all Audit Events
-and it is `null` for Traffic Events.
+and it is `null` for Traffic Events (since they do not include a credential).

 | Name | Description | Example |
 | ---------- | ------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |

@@ -312,3 +316,7 @@

 Events are available to all ngrok users with a free tier that includes the
 transmission of up to 10,000 events per month.
+
+The Traffic Inspector is included with all ngrok accounts and has a
+retention period of 3 days. 
An additional 90 days can be purchased as
+an add-on to any production plan.
diff --git a/docs/obs/traffic-inspection.mdx b/docs/obs/traffic-inspection.mdx
index ce162cf0e..c38edc7c0 100644
--- a/docs/obs/traffic-inspection.mdx
+++ b/docs/obs/traffic-inspection.mdx
@@ -47,6 +47,8 @@ Replaying requests allows users to re-send an HTTP request upstream without dire

 Once Full Capture is enabled, a replay button will appear for fully captured requests in the traffic event details pane of the dashboard. If this button is unavailable, it likely indicates that the request occurred before Full Capture was fully activated, or the request was too large and has been truncated.

+Replay works by resending your original, unmodified request to your endpoint. It will be subject to the policies of that endpoint at the time of replay (which may differ from the policies that applied to the original request).
+
 When replaying a request to your service, we aim to make minimal changes. However, some alterations are inevitable. Here's a list of changes we make to replayed requests:

 - The replayed request will originate from a different IP address, so any IP restrictions may prevent the replay from reaching your upstream.