Make website plot anchors consistent (#58)
iand authored Jul 6, 2023
1 parent d8940b4 commit d1db76f
Showing 18 changed files with 468 additions and 468 deletions.
52 changes: 26 additions & 26 deletions content.en/websites/blog.ipfs.tech.md
@@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t

## Overview

- ### Performance over Kubo {#website-snapshot-performance-gauge-blogipfstech-kubo}
+ ### Performance over Kubo {#website-snapshot-performance-gauge-kubo}

- {{< plotly json="../../plots/latest/website-snapshot-performance-gauge-blog.ipfs.tech-KUBO.json" height="300px" >}}
+ {{< plotly json="../../plots/latest/website-snapshot-performance-gauge-blog.ipfs.tech-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}}

The graph presents a comparison of two crucial Kubo web performance metrics: Time to First Byte (TTFB) and First Contentful Paint (FCP).
The data shows the 90th percentile of both metrics and was gathered during the previous week.
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego

The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning.

- ### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-blogipfstech}
+ ### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics}

- {{< plotly json="../../plots/latest/website-trend-metrics-blog.ipfs.tech.json" height="300px" >}}
+ {{< plotly json="../../plots/latest/website-trend-metrics-blog.ipfs.tech.json" height="300px" id="website-trend-metrics" >}}

The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the website's performance, allowing you to monitor key metrics over the past 30 days.

@@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th

It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, they allow discerning general tendencies and fluctuations in these metrics across the combined dataset.

- ### Unique IPFS Website Providers per Day {#website-trend-providers-blogipfstech}
+ ### Unique IPFS Website Providers per Day {#website-trend-providers}

- {{< plotly json="../../plots/latest/website-trend-providers-blog.ipfs.tech.json" height="350px" >}}
+ {{< plotly json="../../plots/latest/website-trend-providers-blog.ipfs.tech.json" height="350px" id="website-trend-providers" >}}

One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as:

@@ -56,69 +56,69 @@ In order for a website (or CID more in general) to be available and accessible i
In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right-hand side y-axis).
Deployments are determined by monitoring the CIDs found within the website's IPNS records. If the CID changes, we consider this a new deployment.
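The deployment-detection rule described above (each change in the IPNS-resolved CID counts as a new deployment) can be sketched as follows. This is a minimal illustration, not the actual measurement pipeline, and the sample CIDs in the usage example are made up:

```python
def count_deployments(observed_cids):
    """Count website deployments from a chronological list of CIDs
    resolved from the site's IPNS record: every time the CID differs
    from the previously observed one, that counts as one deployment."""
    deployments = 0
    previous = None
    for cid in observed_cids:
        if previous is not None and cid != previous:
            deployments += 1
        previous = cid
    return deployments


# Hypothetical daily observations: the CID changed twice, so two deployments.
print(count_deployments(["QmA", "QmA", "QmB", "QmB", "QmC"]))  # → 2
```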

- #### Known Stable Providers {#website-trend-hosters-blogipfstech}
+ #### Known Stable Providers {#website-trend-hosters}

- {{< plotly json="../../plots/latest/website-trend-hosters-blog.ipfs.tech.json" height="250px" >}}
+ {{< plotly json="../../plots/latest/website-trend-hosters-blog.ipfs.tech.json" height="250px" id="website-trend-hosters" >}}

For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of PeerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph.
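The counting step can be sketched roughly as below. The provider names and PeerIDs are placeholders, not the real ones (those live in the linked `website-trend-hosters.yaml`):

```python
# Hypothetical PeerID allow-lists; the real lists contain six Fleek
# and seven Collab-Cluster PeerIDs.
KNOWN_PROVIDERS = {
    "fleek": {"PeerA1", "PeerA2"},
    "pl-collab-cluster": {"PeerB1", "PeerB2", "PeerB3"},
}


def count_stable_providers(daily_provider_peer_ids):
    """Given the distinct PeerIDs seen providing the website on one day,
    count how many belong to each known hosting provider."""
    seen = set(daily_provider_peer_ids)
    return {name: len(peers & seen) for name, peers in KNOWN_PROVIDERS.items()}


# One known Fleek peer and one known cluster peer were among the providers.
print(count_stable_providers({"PeerA1", "PeerB2", "PeerX"}))
```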

More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT.

- ### IPFS Retrieval Errors {#website-trend-retrieval-errors-blogipfstech-kubo}
+ ### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo}

- {{< plotly json="../../plots/latest/website-trend-retrieval-errors-blog.ipfs.tech-KUBO.json" height="350px" >}}
+ {{< plotly json="../../plots/latest/website-trend-retrieval-errors-blog.ipfs.tech-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}}

This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right-hand side y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS.
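The daily aggregation behind such a plot can be sketched as follows; this is an illustrative reconstruction with made-up probe records, not the pipeline's actual code:

```python
from collections import defaultdict


def daily_error_rates(probes):
    """probes: iterable of (day, succeeded) tuples, aggregated across all
    measurement regions. Returns {day: error_rate} plus {day: probe_count},
    mirroring the error-rate line and the black request-count markers."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for day, succeeded in probes:
        totals[day] += 1
        if not succeeded:
            errors[day] += 1
    rates = {day: errors[day] / totals[day] for day in totals}
    return rates, dict(totals)


rates, counts = daily_error_rates(
    [("day1", True), ("day1", False), ("day2", True)]
)
print(rates)   # → {'day1': 0.5, 'day2': 0.0}
print(counts)  # → {'day1': 2, 'day2': 1}
```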

## Snapshot

This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status.

- ### Website Probes {#website-snapshot-probes-count-blogipfstech}
+ ### Website Probes {#website-snapshot-probes-count}

- {{< plotly json="../../plots/latest/website-snapshot-probes-count-blog.ipfs.tech.json" height="150px" >}}
+ {{< plotly json="../../plots/latest/website-snapshot-probes-count-blog.ipfs.tech.json" height="150px" id="website-snapshot-probes-count" >}}

We perform on average 500 requests per week from each of the seven AWS regions
where our infrastructure is deployed, using [Kubo](https://github.com/ipfs/kubo)
and HTTP. Above is the number of requests for each request method. The number may
vary depending on errors during the fetching process, which we look into in more
detail further down.

- ### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-blogipfstech-kubo-eu-central-1}
+ ### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1}

- {{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-blog.ipfs.tech-KUBO-eu-central-1.json" height="400px" >}}
+ {{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-blog.ipfs.tech-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}}

<small>[What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values)</small>

During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region.

- ### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-blogipfstech}
+ ### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors}

- {{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-blog.ipfs.tech.json" height="350px" >}}
+ {{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-blog.ipfs.tech.json" height="350px" id="website-snapshot-retrieval-errors" >}}

- While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-blogipfstech-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis).
+ While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right-hand side y-axis).

### Kubo Metrics by Region

This series of graphs presents a comprehensive analysis of latency performance across different regions using [Cumulative Distribution Functions (CDFs)](https://en.wikipedia.org/wiki/Cumulative_distribution_function). The primary focus is on three crucial metrics: Time to First Byte (TTFB), First Contentful Paint (FCP), and Largest Contentful Paint (LCP). Each graph in the series shows the CDFs of a specific metric from all our measured regions. CDFs allow for a holistic view of the latency distribution, showcasing how often specific latency values occur within each region.
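For readers unfamiliar with CDFs, an empirical CDF over a latency sample can be computed with a minimal sketch like the one below (the sample values are made up):

```python
def empirical_cdf(samples):
    """Return (sorted_values, cumulative_fractions) for a latency sample.
    The fraction at position i answers: what share of measurements
    completed within sorted_values[i] milliseconds?"""
    xs = sorted(samples)
    n = len(xs)
    fractions = [(i + 1) / n for i in range(n)]
    return xs, fractions


# Three hypothetical TTFB samples in milliseconds.
values, shares = empirical_cdf([300, 100, 200])
print(values)  # → [100, 200, 300]
print(shares)  # the last entry is always 1.0
```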

To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP.

- #### Time To First Byte {#website-snapshot-metric-cdf-blogipfstech-kubo-ttfb}
+ #### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb}

- {{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-ttfb.json" height="320px" >}}
+ {{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}}

- #### First Contentful Paint {#website-snapshot-metric-cdf-blogipfstech-kubo-fcp}
+ #### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp}

- {{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-fcp.json" height="320px" >}}
+ {{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}}

- #### Largest Contentful Paint {#website-snapshot-metric-cdf-blogipfstech-kubo-lcp}
+ #### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp}

- {{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-lcp.json" height="320px" >}}
+ {{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}}

- ### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-blogipfstech-ttfb}
+ ### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb}

- {{< plotly json="../../plots/latest/website-snapshot-http-ratio-blog.ipfs.tech-ttfb.json" height="500px" >}}
+ {{< plotly json="../../plots/latest/website-snapshot-http-ratio-blog.ipfs.tech-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}}

We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster.
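That ratio computation can be sketched as below. The nearest-rank percentile method and the latency values are assumptions for illustration; the measurement pipeline may compute percentiles differently:

```python
def percentile(values, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    xs = sorted(values)
    k = max(0, min(len(xs) - 1, round(p / 100 * len(xs)) - 1))
    return xs[k]


def ttfb_ratio(kubo_ttfbs, http_ttfbs, p=90):
    """Divide Kubo's TTFB percentile by HTTP's for one region.
    A result > 1 means Kubo was slower than HTTP at that percentile;
    a result < 1 means Kubo was faster."""
    return percentile(kubo_ttfbs, p) / percentile(http_ttfbs, p)


# Made-up samples: Kubo is twice as slow as HTTP at the 90th percentile.
print(ttfb_ratio([100, 200, 300, 400, 500], [50, 100, 150, 200, 250]))  # → 2.0
```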

@@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re
| **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) |
| **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) |
| **Undefined** | We could not gather the metric because of internal measurement errors (our fault) |
- | **Fatal** | We could not gather the metric because we could not retrieve the website at all |
+ | **Fatal** | We could not gather the metric because we could not retrieve the website at all |
