diff --git a/content.en/websites/blog.ipfs.tech.md b/content.en/websites/blog.ipfs.tech.md index a3dca97b3..a70b6b150 100644 --- a/content.en/websites/blog.ipfs.tech.md +++ b/content.en/websites/blog.ipfs.tech.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-blogipfstech-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-blog.ipfs.tech-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-blog.ipfs.tech-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. @@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. 
-### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-blogipfstech} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-blog.ipfs.tech.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-blog.ipfs.tech.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. -### Unique IPFS Website Providers per Day {#website-trend-providers-blogipfstech} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-blog.ipfs.tech.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-blog.ipfs.tech.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. 
Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-blogipfstech} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-blog.ipfs.tech.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-blog.ipfs.tech.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. 
-### IPFS Retrieval Errors {#website-trend-retrieval-errors-blogipfstech-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-blog.ipfs.tech-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-blog.ipfs.tech-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. 
-### Website Probes {#website-snapshot-probes-count-blogipfstech} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-blog.ipfs.tech.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-blog.ipfs.tech.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. -### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-blogipfstech-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-blog.ipfs.tech-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-blog.ipfs.tech-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. 
-### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-blogipfstech} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-blog.ipfs.tech.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-blog.ipfs.tech.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-blogipfstech-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). 
By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. -#### Time To First Byte {#website-snapshot-metric-cdf-blogipfstech-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-blogipfstech-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-blogipfstech-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.ipfs.tech-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-blogipfstech-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-blog.ipfs.tech-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-blog.ipfs.tech-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website 
requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/blog.libp2p.io.md b/content.en/websites/blog.libp2p.io.md index 3729bbe68..dfbced4d3 100644 --- a/content.en/websites/blog.libp2p.io.md +++ b/content.en/websites/blog.libp2p.io.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-bloglibp2pio-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-blog.libp2p.io-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-blog.libp2p.io-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-bloglibp2pio} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-blog.libp2p.io.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-blog.libp2p.io.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-bloglibp2pio} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-blog.libp2p.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-blog.libp2p.io.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-bloglibp2pio} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-blog.libp2p.io.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-blog.libp2p.io.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). 
We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-bloglibp2pio-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-blog.libp2p.io-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-blog.libp2p.io-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. 
This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-bloglibp2pio} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-blog.libp2p.io.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-blog.libp2p.io.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-bloglibp2pio-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-blog.libp2p.io-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-blog.libp2p.io-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-bloglibp2pio} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-blog.libp2p.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-blog.libp2p.io.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-bloglibp2pio-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. 
-#### Time To First Byte {#website-snapshot-metric-cdf-bloglibp2pio-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.libp2p.io-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.libp2p.io-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-bloglibp2pio-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.libp2p.io-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.libp2p.io-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-bloglibp2pio-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.libp2p.io-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-blog.libp2p.io-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-bloglibp2pio-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-blog.libp2p.io-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-blog.libp2p.io-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. 
A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/consensuslab.world.md b/content.en/websites/consensuslab.world.md index 0f684b895..83dc5091c 100644 --- a/content.en/websites/consensuslab.world.md +++ b/content.en/websites/consensuslab.world.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-consensuslabworld-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-consensuslab.world-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-consensuslab.world-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-consensuslabworld} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-consensuslab.world.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-consensuslab.world.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-consensuslabworld} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-consensuslab.world.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-consensuslab.world.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. 
-#### Known Stable Providers {#website-trend-hosters-consensuslabworld} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-consensuslab.world.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-consensuslab.world.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-consensuslabworld-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-consensuslab.world-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-consensuslab.world-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. 
It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-consensuslabworld} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-consensuslab.world.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-consensuslab.world.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-consensuslabworld-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-consensuslab.world-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-consensuslab.world-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-consensuslabworld} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-consensuslab.world.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-consensuslab.world.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-consensuslabworld-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes, it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right-hand side y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP.
-#### Time To First Byte {#website-snapshot-metric-cdf-consensuslabworld-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-consensuslab.world-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-consensuslab.world-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-consensuslabworld-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-consensuslab.world-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-consensuslab.world-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-consensuslabworld-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-consensuslab.world-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-consensuslab.world-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-consensuslabworld-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-consensuslab.world-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-consensuslab.world-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP.
A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/docs.ipfs.tech.md b/content.en/websites/docs.ipfs.tech.md index 36e21c2d4..f3084a5e3 100644 --- a/content.en/websites/docs.ipfs.tech.md +++ b/content.en/websites/docs.ipfs.tech.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-docsipfstech-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-docs.ipfs.tech-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-docs.ipfs.tech-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-docsipfstech} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-docs.ipfs.tech.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-docs.ipfs.tech.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-docsipfstech} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-docs.ipfs.tech.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-docs.ipfs.tech.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-docsipfstech} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-docs.ipfs.tech.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-docs.ipfs.tech.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). 
We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-docsipfstech-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-docs.ipfs.tech-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-docs.ipfs.tech-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. 
This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-docsipfstech} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-docs.ipfs.tech.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-docs.ipfs.tech.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-docsipfstech-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-docs.ipfs.tech-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-docs.ipfs.tech-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-docsipfstech} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-docs.ipfs.tech.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-docs.ipfs.tech.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-docsipfstech-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes, it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right-hand side y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP.
-#### Time To First Byte {#website-snapshot-metric-cdf-docsipfstech-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.ipfs.tech-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.ipfs.tech-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-docsipfstech-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.ipfs.tech-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.ipfs.tech-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-docsipfstech-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.ipfs.tech-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.ipfs.tech-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-docsipfstech-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-docs.ipfs.tech-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-docs.ipfs.tech-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP.
A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/docs.libp2p.io.md b/content.en/websites/docs.libp2p.io.md index 4eb8e46fa..f20879528 100644 --- a/content.en/websites/docs.libp2p.io.md +++ b/content.en/websites/docs.libp2p.io.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-docslibp2pio-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-docs.libp2p.io-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-docs.libp2p.io-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-docslibp2pio} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-docs.libp2p.io.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-docs.libp2p.io.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-docslibp2pio} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-docs.libp2p.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-docs.libp2p.io.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-docslibp2pio} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-docs.libp2p.io.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-docs.libp2p.io.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). 
We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-docslibp2pio-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-docs.libp2p.io-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-docs.libp2p.io-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. 
This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-docslibp2pio} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-docs.libp2p.io.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-docs.libp2p.io.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-docslibp2pio-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-docs.libp2p.io-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-docs.libp2p.io-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-docslibp2pio} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-docs.libp2p.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-docs.libp2p.io.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-docslibp2pio-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes, it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right-hand side y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP.
-#### Time To First Byte {#website-snapshot-metric-cdf-docslibp2pio-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.libp2p.io-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.libp2p.io-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-docslibp2pio-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.libp2p.io-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.libp2p.io-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-docslibp2pio-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.libp2p.io-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-docs.libp2p.io-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-docslibp2pio-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-docs.libp2p.io-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-docs.libp2p.io-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP.
A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/drand.love.md b/content.en/websites/drand.love.md index 55cc7f99c..2d9d4e765 100644 --- a/content.en/websites/drand.love.md +++ b/content.en/websites/drand.love.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-drandlove-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-drand.love-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-drand.love-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-drandlove} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-drand.love.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-drand.love.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-drandlove} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-drand.love.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-drand.love.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-drandlove} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-drand.love.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-drand.love.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). 
We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-drandlove-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-drand.love-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-drand.love-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. 
By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-drandlove} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-drand.love.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-drand.love.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. -### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-drandlove-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-drand.love-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-drand.love-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. 
mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-drandlove} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-drand.love.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-drand.love.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-drandlove-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes, it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right-hand side y-axis).
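For illustration, the per-region success rates shown in this graph boil down to a simple grouped aggregation of probe outcomes. The sketch below is a minimal, assumption-laden illustration — the record layout, field names, and sample values are invented for the example and are not Tiros' actual schema:

```python
from collections import defaultdict

# Hypothetical probe records: (region, request method, probe succeeded).
probes = [
    ("eu-central-1", "KUBO", True),
    ("eu-central-1", "KUBO", False),
    ("eu-central-1", "HTTP", True),
    ("us-east-1", "KUBO", True),
    ("us-east-1", "HTTP", True),
]

def success_rates(records):
    """Aggregate probe outcomes into a success rate per (region, method)."""
    totals = defaultdict(int)
    successes = defaultdict(int)
    for region, method, ok in records:
        totals[(region, method)] += 1
        successes[(region, method)] += ok  # bool counts as 0 or 1
    return {key: successes[key] / totals[key] for key in totals}

rates = success_rates(probes)
print(rates[("eu-central-1", "KUBO")])  # 0.5
```

The black probe-count markers in the graph correspond to the `totals` dictionary here: the denominator is reported alongside the rate so that low-volume regions are not over-interpreted.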
### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. -#### Time To First Byte {#website-snapshot-metric-cdf-drandlove-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-drand.love-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-drand.love-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-drandlove-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-drand.love-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-drand.love-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-drandlove-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-drand.love-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-drand.love-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-drandlove-ttfb} 
+### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-drand.love-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-drand.love-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/filecoin.io.md b/content.en/websites/filecoin.io.md index 7383bc20b..342f2a7d4 100644 --- a/content.en/websites/filecoin.io.md +++ b/content.en/websites/filecoin.io.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-filecoinio-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-filecoin.io-KUBO.json" height="300px" >}} +{{< plotly
json="../../plots/latest/website-snapshot-performance-gauge-filecoin.io-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. @@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-filecoinio} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-filecoin.io.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-filecoin.io.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. 
@@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. -### Unique IPFS Website Providers per Day {#website-trend-providers-filecoinio} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-filecoin.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-filecoin.io.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. 
-#### Known Stable Providers {#website-trend-hosters-filecoinio} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-filecoin.io.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-filecoin.io.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-filecoinio-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-filecoin.io-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-filecoin.io-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. 
The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-filecoinio} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-filecoin.io.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-filecoin.io.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-filecoinio-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-filecoin.io-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-filecoin.io-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-filecoinio} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-filecoin.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-filecoin.io.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-filecoinio-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes, it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right-hand side y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP.
-#### Time To First Byte {#website-snapshot-metric-cdf-filecoinio-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-filecoin.io-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-filecoin.io-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-filecoinio-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-filecoin.io-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-filecoin.io-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-filecoinio-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-filecoin.io-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-filecoin.io-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-filecoinio-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-filecoin.io-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-filecoin.io-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile.
Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/gen.template b/content.en/websites/gen.template index d9e8d3daf..d286ddc70 100644 --- a/content.en/websites/gen.template +++ b/content.en/websites/gen.template @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-|| .Anchor ||-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-|| .Website ||-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-|| .Website ||-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. @@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. 
This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-|| .Anchor ||} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-|| .Website ||.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-|| .Website ||.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. -### Unique IPFS Website Providers per Day {#website-trend-providers-|| .Anchor ||} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-|| .Website ||.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-|| .Website ||.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. 
This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-|| .Anchor ||} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-|| .Website ||.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-|| .Website ||.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. 
We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-|| .Anchor ||-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-|| .Website ||-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-|| .Website ||-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. 
This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-|| .Anchor ||} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-|| .Website ||.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-|| .Website ||.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. -### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-|| .Anchor ||-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-|| .Website ||-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-|| .Website ||-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. 
-### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-|| .Anchor ||} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-|| .Website ||.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-|| .Website ||.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-|| .Anchor ||-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). 
By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. -#### Time To First Byte {#website-snapshot-metric-cdf-|| .Anchor ||-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-|| .Website ||-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-|| .Website ||-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-|| .Anchor ||-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-|| .Website ||-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-|| .Website ||-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-|| .Anchor ||-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-|| .Website ||-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-|| .Website ||-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-|| .Anchor ||-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-|| .Website ||-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-|| .Website ||-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website 
requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/green.filecoin.io.md b/content.en/websites/green.filecoin.io.md index 35e20a3c3..8ae57647c 100644 --- a/content.en/websites/green.filecoin.io.md +++ b/content.en/websites/green.filecoin.io.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-greenfilecoinio-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-green.filecoin.io-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-green.filecoin.io-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-greenfilecoinio} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-green.filecoin.io.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-green.filecoin.io.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-greenfilecoinio} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-green.filecoin.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-green.filecoin.io.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. 
-#### Known Stable Providers {#website-trend-hosters-greenfilecoinio} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-green.filecoin.io.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-green.filecoin.io.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-greenfilecoinio-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-green.filecoin.io-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-green.filecoin.io-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. 
The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-greenfilecoinio} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-green.filecoin.io.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-green.filecoin.io.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-greenfilecoinio-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-green.filecoin.io-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-green.filecoin.io-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-greenfilecoinio} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-green.filecoin.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-green.filecoin.io.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-greenfilecoinio-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. 
-#### Time To First Byte {#website-snapshot-metric-cdf-greenfilecoinio-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-green.filecoin.io-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-green.filecoin.io-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-greenfilecoinio-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-green.filecoin.io-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-green.filecoin.io-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-greenfilecoinio-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-green.filecoin.io-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-green.filecoin.io-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-greenfilecoinio-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-green.filecoin.io-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-green.filecoin.io-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. 
A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/ipfs.tech.md b/content.en/websites/ipfs.tech.md index 921eed455..0d5362d76 100644 --- a/content.en/websites/ipfs.tech.md +++ b/content.en/websites/ipfs.tech.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-ipfstech-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-ipfs.tech-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-ipfs.tech-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-ipfstech} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-ipfs.tech.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-ipfs.tech.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-ipfstech} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-ipfs.tech.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-ipfs.tech.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-ipfstech} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-ipfs.tech.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-ipfs.tech.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). 
We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-ipfstech-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-ipfs.tech-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-ipfs.tech-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. 
By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-ipfstech} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-ipfs.tech.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-ipfs.tech.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. -### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-ipfstech-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-ipfs.tech-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-ipfs.tech-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. 
mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-ipfstech} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-ipfs.tech.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-ipfs.tech.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-ipfstech-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). 
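As a sketch of how the per-region figures behind this graph could be derived: the probe-record layout below is an assumption for illustration, not the actual schema used by the measurement pipeline.

```python
from collections import defaultdict

# Hypothetical probe records: (region, request method, success flag).
probes = [
    ("eu-central-1", "KUBO", True),
    ("eu-central-1", "KUBO", False),
    ("eu-central-1", "HTTP", True),
    ("us-east-1", "KUBO", True),
    ("us-east-1", "HTTP", True),
]

def error_rates(records):
    """Return per-(region, method) error rates and probe counts."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for region, method, ok in records:
        key = (region, method)
        totals[key] += 1
        if not ok:
            failures[key] += 1
    return {k: failures[k] / totals[k] for k in totals}, dict(totals)

rates, counts = error_rates(probes)
print(rates[("eu-central-1", "KUBO")])   # error rate for that region/method
print(counts[("eu-central-1", "KUBO")])  # probe count (the black markers)
```

The error rates correspond to the bar heights per region, while the counts correspond to the black probe-volume markers.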
### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. -#### Time To First Byte {#website-snapshot-metric-cdf-ipfstech-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipfs.tech-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipfs.tech-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-ipfstech-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipfs.tech-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipfs.tech-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-ipfstech-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipfs.tech-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipfs.tech-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ipfstech-ttfb} +### Kubo vs 
HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-ipfs.tech-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-ipfs.tech-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/ipld.io.md b/content.en/websites/ipld.io.md index 208acdd62..60bd29dd9 100644 --- a/content.en/websites/ipld.io.md +++ b/content.en/websites/ipld.io.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-ipldio-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-ipld.io-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-ipld.io-KUBO.json"
height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. @@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-ipldio} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-ipld.io.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-ipld.io.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. 
Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. -### Unique IPFS Website Providers per Day {#website-trend-providers-ipldio} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-ipld.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-ipld.io.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. 
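The deployment rule described above — a new deployment is counted whenever the CID resolved from the website's IPNS record changes — can be sketched as follows; the record format and CID values are illustrative, not real observations.

```python
def count_deployments(daily_cids):
    """Count CID changes in a chronological list of (day, cid) tuples."""
    deployments = 0
    prev_cid = None
    for _day, cid in daily_cids:
        # A differing CID relative to the previous observation is one deployment.
        if prev_cid is not None and cid != prev_cid:
            deployments += 1
        prev_cid = cid
    return deployments

observations = [
    ("2023-07-01", "bafy...a"),
    ("2023-07-02", "bafy...a"),  # unchanged -> no deployment
    ("2023-07-03", "bafy...b"),  # CID changed -> one deployment
    ("2023-07-04", "bafy...c"),  # changed again -> another deployment
]
print(count_deployments(observations))  # -> 2
```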
-#### Known Stable Providers {#website-trend-hosters-ipldio} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-ipld.io.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-ipld.io.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-ipldio-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-ipld.io-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-ipld.io-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. 
The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-ipldio} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-ipld.io.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-ipld.io.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-ipldio-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-ipld.io-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-ipld.io-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-ipldio} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-ipld.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-ipld.io.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-ipldio-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). 
+While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. 
-#### Time To First Byte {#website-snapshot-metric-cdf-ipldio-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipld.io-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipld.io-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-ipldio-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipld.io-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipld.io-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-ipldio-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipld.io-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-ipld.io-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ipldio-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-ipld.io-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-ipld.io-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile.
Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/libp2p.io.md b/content.en/websites/libp2p.io.md index 93fb5395d..f0d5164cc 100644 --- a/content.en/websites/libp2p.io.md +++ b/content.en/websites/libp2p.io.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-libp2pio-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-libp2p.io-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-libp2p.io-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. @@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. 
This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-libp2pio} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-libp2p.io.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-libp2p.io.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. -### Unique IPFS Website Providers per Day {#website-trend-providers-libp2pio} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-libp2p.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-libp2p.io.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. 
This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-libp2pio} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-libp2p.io.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-libp2p.io.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. 
We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-libp2pio-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-libp2p.io-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-libp2p.io-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. 
This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-libp2pio} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-libp2p.io.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-libp2p.io.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. -### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-libp2pio-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-libp2p.io-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-libp2p.io-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. 
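The rating buckets behind these proportions can be sketched as below. The TTFB thresholds (0.8 s good, 1.8 s poor) follow the published web-vitals guidance but are assumptions here, and the sample latencies are made up.

```python
from collections import Counter

def rate(value_ms, good_ms, poor_ms):
    """Bucket a measurement per the Values table: good / needs improvement / poor."""
    if value_ms <= good_ms:
        return "good"
    if value_ms <= poor_ms:
        return "needs improvement"
    return "poor"

# Made-up TTFB samples (ms) for one region and time window.
ttfb_samples_ms = [300, 700, 1200, 2500]
ratings = [rate(v, good_ms=800, poor_ms=1800) for v in ttfb_samples_ms]

# Proportion of each rating out of all measurements: the y-axis of the bar chart.
proportions = {k: c / len(ratings) for k, c in Counter(ratings).items()}
print(ratings)
print(proportions["good"])  # -> 0.5
```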
-### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-libp2pio} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-libp2p.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-libp2p.io.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-libp2pio-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). 
By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. -#### Time To First Byte {#website-snapshot-metric-cdf-libp2pio-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-libp2p.io-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-libp2p.io-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-libp2pio-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-libp2p.io-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-libp2p.io-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-libp2pio-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-libp2p.io-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-libp2p.io-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-libp2pio-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-libp2p.io-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-libp2p.io-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP.
Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/probelab.io.md b/content.en/websites/probelab.io.md index b231ac586..d5e57cd0a 100644 --- a/content.en/websites/probelab.io.md +++ b/content.en/websites/probelab.io.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-probelabio-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-probelab.io-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-probelab.io-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-probelabio} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-probelab.io.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-probelab.io.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-probelabio} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-probelab.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-probelab.io.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-probelabio} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-probelab.io.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-probelab.io.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). 
We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-probelabio-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-probelab.io-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-probelab.io-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. 
This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-probelabio} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-probelab.io.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-probelab.io.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. -### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-probelabio-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-probelab.io-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-probelab.io-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. 
mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-probelabio} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-probelab.io.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-probelab.io.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-probelabio-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right-hand side y-axis).
### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. -#### Time To First Byte {#website-snapshot-metric-cdf-probelabio-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-probelab.io-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-probelab.io-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-probelabio-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-probelab.io-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-probelab.io-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-probelabio-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-probelab.io-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-probelab.io-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) 
{#website-snapshot-http-ratio-probelabio-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-probelab.io-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-probelab.io-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/protocol.ai.md b/content.en/websites/protocol.ai.md index 1722a7aff..ec34081d4 100644 --- a/content.en/websites/protocol.ai.md +++ b/content.en/websites/protocol.ai.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-protocolai-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-protocol.ai-KUBO.json" height="300px" >}} +{{<
plotly json="../../plots/latest/website-snapshot-performance-gauge-protocol.ai-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. @@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-protocolai} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-protocol.ai.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-protocol.ai.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. 
@@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. -### Unique IPFS Website Providers per Day {#website-trend-providers-protocolai} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-protocol.ai.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-protocol.ai.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. 
-#### Known Stable Providers {#website-trend-hosters-protocolai} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-protocol.ai.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-protocol.ai.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-protocolai-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-protocol.ai-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-protocol.ai-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. 
The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-protocolai} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-protocol.ai.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-protocol.ai.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-protocolai-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-protocol.ai-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-protocol.ai-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-protocolai} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-protocol.ai.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-protocol.ai.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-protocolai-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right-hand side y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP.
-#### Time To First Byte {#website-snapshot-metric-cdf-protocolai-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-protocol.ai-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-protocol.ai-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-protocolai-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-protocol.ai-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-protocol.ai-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-protocolai-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-protocol.ai-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-protocol.ai-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-protocolai-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-protocol.ai-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-protocol.ai-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile.
Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/research.protocol.ai.md b/content.en/websites/research.protocol.ai.md index b0e864f93..86f097fbe 100644 --- a/content.en/websites/research.protocol.ai.md +++ b/content.en/websites/research.protocol.ai.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-researchprotocolai-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-research.protocol.ai-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-research.protocol.ai-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-researchprotocolai} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-research.protocol.ai.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-research.protocol.ai.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-researchprotocolai} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-research.protocol.ai.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-research.protocol.ai.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. 
-#### Known Stable Providers {#website-trend-hosters-researchprotocolai} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-research.protocol.ai.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-research.protocol.ai.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-researchprotocolai-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-research.protocol.ai-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-research.protocol.ai-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. 
It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-researchprotocolai} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-research.protocol.ai.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-research.protocol.ai.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-researchprotocolai-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-research.protocol.ai-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-research.protocol.ai-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-researchprotocolai} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-research.protocol.ai.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-research.protocol.ai.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-researchprotocolai-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. 
-#### Time To First Byte {#website-snapshot-metric-cdf-researchprotocolai-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-research.protocol.ai-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-research.protocol.ai-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-researchprotocolai-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-research.protocol.ai-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-research.protocol.ai-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-researchprotocolai-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-research.protocol.ai-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-research.protocol.ai-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-researchprotocolai-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-research.protocol.ai-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-research.protocol.ai-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP.
A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/specs.ipfs.tech.md b/content.en/websites/specs.ipfs.tech.md index b57ee2bac..4856fb9a5 100644 --- a/content.en/websites/specs.ipfs.tech.md +++ b/content.en/websites/specs.ipfs.tech.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-specsipfstech-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-specs.ipfs.tech-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-specs.ipfs.tech-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-specsipfstech} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-specs.ipfs.tech.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-specs.ipfs.tech.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-specsipfstech} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-specs.ipfs.tech.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-specs.ipfs.tech.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. 
-#### Known Stable Providers {#website-trend-hosters-specsipfstech} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-specs.ipfs.tech.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-specs.ipfs.tech.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)). The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT. -### IPFS Retrieval Errors {#website-trend-retrieval-errors-specsipfstech-kubo} +### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo} -{{< plotly json="../../plots/latest/website-trend-retrieval-errors-specs.ipfs.tech-KUBO.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-retrieval-errors-specs.ipfs.tech-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}} This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. 
The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS. @@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days. This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status. -### Website Probes {#website-snapshot-probes-count-specsipfstech} +### Website Probes {#website-snapshot-probes-count} -{{< plotly json="../../plots/latest/website-snapshot-probes-count-specs.ipfs.tech.json" height="150px" >}} +{{< plotly json="../../plots/latest/website-snapshot-probes-count-specs.ipfs.tech.json" height="150px" id="website-snapshot-probes-count" >}} We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo) @@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n vary depending on errors during the fetching process, which we look into more detail further down. 
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-specsipfstech-kubo-eu-central-1} +### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1} -{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-specs.ipfs.tech-KUBO-eu-central-1.json" height="400px" >}} +{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-specs.ipfs.tech-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}} [What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values) During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region. -### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-specsipfstech} +### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors} -{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-specs.ipfs.tech.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-specs.ipfs.tech.json" height="350px" id="website-snapshot-retrieval-errors" >}} -While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-specsipfstech-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. 
The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). +While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis). ### Kubo Metrics by Region @@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP. 
-#### Time To First Byte {#website-snapshot-metric-cdf-specsipfstech-kubo-ttfb} +#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-specs.ipfs.tech-KUBO-ttfb.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-specs.ipfs.tech-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}} -#### First Contentful Paint {#website-snapshot-metric-cdf-specsipfstech-kubo-fcp} +#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-specs.ipfs.tech-KUBO-fcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-specs.ipfs.tech-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}} -#### Largest Contentful Paint {#website-snapshot-metric-cdf-specsipfstech-kubo-lcp} +#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp} -{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-specs.ipfs.tech-KUBO-lcp.json" height="320px" >}} +{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-specs.ipfs.tech-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}} -### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-specsipfstech-ttfb} +### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb} -{{< plotly json="../../plots/latest/website-snapshot-http-ratio-specs.ipfs.tech-ttfb.json" height="500px" >}} +{{< plotly json="../../plots/latest/website-snapshot-http-ratio-specs.ipfs.tech-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}} We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP.
A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster. @@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re | **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) | | **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) | | **Undefined** | We could not gather the metric because of internal measurement errors (our fault) | -| **Fatal** | We could not gather the metric because we could not retrieve the website at all | \ No newline at end of file +| **Fatal** | We could not gather the metric because we could not retrieve the website at all | diff --git a/content.en/websites/strn.network.md b/content.en/websites/strn.network.md index 7ffb5f26d..974342726 100644 --- a/content.en/websites/strn.network.md +++ b/content.en/websites/strn.network.md @@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t ## Overview -### Performance over Kubo {#website-snapshot-performance-gauge-strnnetwork-kubo} +### Performance over Kubo {#website-snapshot-performance-gauge-kubo} -{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-strn.network-KUBO.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-strn.network-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}} The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week. 
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning. -### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-strnnetwork} +### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics} -{{< plotly json="../../plots/latest/website-trend-metrics-strn.network.json" height="300px" >}} +{{< plotly json="../../plots/latest/website-trend-metrics-strn.network.json" height="300px" id="website-trend-metrics" >}} The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days. @@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset. 
-### Unique IPFS Website Providers per Day {#website-trend-providers-strnnetwork} +### Unique IPFS Website Providers per Day {#website-trend-providers} -{{< plotly json="../../plots/latest/website-trend-providers-strn.network.json" height="350px" >}} +{{< plotly json="../../plots/latest/website-trend-providers-strn.network.json" height="350px" id="website-trend-providers" >}} One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as: @@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment. -#### Known Stable Providers {#website-trend-hosters-strnnetwork} +#### Known Stable Providers {#website-trend-hosters} -{{< plotly json="../../plots/latest/website-trend-hosters-strn.network.json" height="250px" >}} +{{< plotly json="../../plots/latest/website-trend-hosters-strn.network.json" height="250px" id="website-trend-hosters" >}} For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). 
We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)).
The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT.
-### IPFS Retrieval Errors {#website-trend-retrieval-errors-strnnetwork-kubo}
+### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo}
-{{< plotly json="../../plots/latest/website-trend-retrieval-errors-strn.network-KUBO.json" height="350px" >}}
+{{< plotly json="../../plots/latest/website-trend-retrieval-errors-strn.network-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}}
This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS.
@@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days.
This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status.
-### Website Probes {#website-snapshot-probes-count-strnnetwork}
+### Website Probes {#website-snapshot-probes-count}
-{{< plotly json="../../plots/latest/website-snapshot-probes-count-strn.network.json" height="150px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-probes-count-strn.network.json" height="150px" id="website-snapshot-probes-count" >}}
We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo)
@@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n
vary depending on errors during the fetching process, which we look into more detail further down.
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-strnnetwork-kubo-eu-central-1}
+### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1}
-{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-strn.network-KUBO-eu-central-1.json" height="400px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-strn.network-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}}
[What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values)
During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region.
-### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-strnnetwork}
+### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors}
-{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-strn.network.json" height="350px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-strn.network.json" height="350px" id="website-snapshot-retrieval-errors" >}}
-While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-strnnetwork-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis).
+While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis).
### Kubo Metrics by Region
@@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a
To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP.
-#### Time To First Byte {#website-snapshot-metric-cdf-strnnetwork-kubo-ttfb}
+#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb}
-{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-strn.network-KUBO-ttfb.json" height="320px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-strn.network-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}}
-#### First Contentful Paint {#website-snapshot-metric-cdf-strnnetwork-kubo-fcp}
+#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp}
-{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-strn.network-KUBO-fcp.json" height="320px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-strn.network-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}}
-#### Largest Contentful Paint {#website-snapshot-metric-cdf-strnnetwork-kubo-lcp}
+#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp}
-{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-strn.network-KUBO-lcp.json" height="320px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-strn.network-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}}
-### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-strnnetwork-ttfb}
+### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb}
-{{< plotly json="../../plots/latest/website-snapshot-http-ratio-strn.network-ttfb.json" height="500px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-http-ratio-strn.network-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}}
We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster.
@@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re
| **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) |
| **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) |
| **Undefined** | We could not gather the metric because of internal measurement errors (our fault) |
-| **Fatal** | We could not gather the metric because we could not retrieve the website at all |
\ No newline at end of file
+| **Fatal** | We could not gather the metric because we could not retrieve the website at all |
diff --git a/content.en/websites/web3.storage.md b/content.en/websites/web3.storage.md
index 1b1be6682..196ad6ed0 100644
--- a/content.en/websites/web3.storage.md
+++ b/content.en/websites/web3.storage.md
@@ -11,9 +11,9 @@ We initially present an Overview of the performance, followed by Trends, i.e., t
## Overview
-### Performance over Kubo {#website-snapshot-performance-gauge-web3storage-kubo}
+### Performance over Kubo {#website-snapshot-performance-gauge-kubo}
-{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-web3.storage-KUBO.json" height="300px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-performance-gauge-web3.storage-KUBO.json" height="300px" id="website-snapshot-performance-gauge-kubo" >}}
The graph presents a comparison of two crucial Kubo web performance metrics (90th percentile): Time to First Byte (TTFB) and First Contentful Paint (FCP). The data displayed shows the 90th percentile of both metrics and was gathered during the previous week.
@@ -26,9 +26,9 @@ The graph utilizes shaded areas in different colors to denote performance catego
The website measurement data can be viewed from two distinct perspectives: as a snapshot of the most recent data points (see [Snapshot](#snapshot)) and as a progression over time. This section provides insights into the overall trends by presenting metrics over time. Analyzing data solely as a snapshot can offer a momentary glimpse into the current state, but it may lack context and fail to capture the bigger picture. However, by examining data over time, patterns and trends emerge, allowing for a deeper understanding of the data's trajectory and potential future outcomes. This section's focus on metrics over time enhances the ability to identify and interpret these trends, enabling informed decision-making and strategic planning.
-### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics-web3storage}
+### Web-Vitals Metrics (90th Percentile) {#website-trend-metrics}
-{{< plotly json="../../plots/latest/website-trend-metrics-web3.storage.json" height="300px" >}}
+{{< plotly json="../../plots/latest/website-trend-metrics-web3.storage.json" height="300px" id="website-trend-metrics" >}}
The [Web-Vitals](https://web.dev/vitals/) Metrics graph provides high-level insights into the websites' performance, allowing you to monitor key metrics over the past 30 days.
@@ -41,9 +41,9 @@ The graph showcases five essential web vitals metrics, enabling you to assess th
It is important to note that the resulting metrics are artificial composites and do not reflect the specific performance of any individual region. Rather, it allows discerning general tendencies and fluctuations in these metrics across the combined dataset.
-### Unique IPFS Website Providers per Day {#website-trend-providers-web3storage}
+### Unique IPFS Website Providers per Day {#website-trend-providers}
-{{< plotly json="../../plots/latest/website-trend-providers-web3.storage.json" height="350px" >}}
+{{< plotly json="../../plots/latest/website-trend-providers-web3.storage.json" height="350px" id="website-trend-providers" >}}
One of the primary reasons why a website (CID) might not be available over Kubo is that there are no providers for the content. This graph showcases the unique providing peers as identified by distinct [PeerIDs](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) discovered throughout a specific day in the IPFS DHT. We look up the IPNS record for the website which gives us the current CID of the website's content. Then, we look up all provider records in the IPFS DHT and record the distinct PeerIDs of the providing peers. Finally, we try to connect to all the discovered peers, and based on the outcome, classify them as:
@@ -56,17 +56,17 @@ In order for a website (or CID more in general) to be available and accessible i
In addition to the peer-related information, the graph also includes black markers that represent the number of website deployments per day (count shown on the right handside y-axis). Deployments are determined by monitoring the CIDs found within the websites' IPNS records. If the CID changes, we consider this a new deployment.
-#### Known Stable Providers {#website-trend-hosters-web3storage}
+#### Known Stable Providers {#website-trend-hosters}
-{{< plotly json="../../plots/latest/website-trend-hosters-web3.storage.json" height="250px" >}}
+{{< plotly json="../../plots/latest/website-trend-hosters-web3.storage.json" height="250px" id="website-trend-hosters" >}}
For the above graph, we obtained the PeerIDs from two hosting providers/pinning services: [Protocol Labs' IPFS Websites Collab-Cluster](https://collab.ipfscluster.io/) and [Fleek](https://fleek.co). We monitor how many of their PeerIDs appear in the list of peers providing the website to the DHT on a daily basis. We gather the list of providing peers every six hours from seven different vantage points with two different Kubo versions (see [Tiros](/tools/tiros)), and aggregate the distinct peers we have found. Then we count the number of peerIDs that belong to either Fleek or PL's Collab-Cluster. We monitor six PeerIDs for Fleek and seven PeerIDs for PL's Collab-Cluster ([source](https://github.com/plprobelab/website/blob/main/config/plotdefs-website/website-trend-hosters.yaml#L8)).
The sum of both bars should always be less than or equal to the number of `Reachable Non-Relayed` peers in the previous graph. More importantly, if both bars (for Fleek and the PL Cluster) are at zero, then this very likely means that the website has no stable providers on the IPFS DHT.
-### IPFS Retrieval Errors {#website-trend-retrieval-errors-web3storage-kubo}
+### IPFS Retrieval Errors {#website-trend-retrieval-errors-kubo}
-{{< plotly json="../../plots/latest/website-trend-retrieval-errors-web3.storage-KUBO.json" height="350px" >}}
+{{< plotly json="../../plots/latest/website-trend-retrieval-errors-web3.storage-KUBO.json" height="350px" id="website-trend-retrieval-errors-kubo" >}}
This graph shows error rates of website requests via Kubo over the past 30 days. It combines measurements from all our measurement regions. The x-axis represents days, while the y-axis displays error rates. Additionally, black markers indicate the actual number of requests (from our infrastructure) per day with the corresponding count shown on the right handside y-axis. This graph offers a concise overview of error rates and probing volume, aiding users in assessing website availability in IPFS.
@@ -74,9 +74,9 @@ This graph shows error rates of website requests via Kubo over the past 30 days.
This section presents a snapshot of the most recent week's data, offering a concise overview of the current state. By focusing on this specific timeframe, readers gain immediate insights into the prevailing metrics and performance indicators. While a single snapshot may lack the context of historical data, it serves as a valuable tool for assessing the present situation. Analyzing the data in this way allows for quick identification of key trends, patterns, and potential areas of concern or success. This section's emphasis on the snapshot of data enables decision-makers to make informed, real-time assessments and take immediate actions based on the current status.
-### Website Probes {#website-snapshot-probes-count-web3storage}
+### Website Probes {#website-snapshot-probes-count}
-{{< plotly json="../../plots/latest/website-snapshot-probes-count-web3.storage.json" height="150px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-probes-count-web3.storage.json" height="150px" id="website-snapshot-probes-count" >}}
We perform on average 500 requests per week from each of the seven AWS regions where our infrastructure is deployed using [Kubo](https://github.com/ipfs/kubo)
@@ -84,19 +84,19 @@ and HTTP. Above is the number of requests for each of the request methods. The n
vary depending on errors during the fetching process, which we look into more detail further down.
-### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-web3storage-kubo-eu-central-1}
+### Web-Vitals Metrics measured from Europe using Kubo {#website-snapshot-web-vitals-barchart-kubo-eu-central-1}
-{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-web3.storage-KUBO-eu-central-1.json" height="400px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-web-vitals-barchart-web3.storage-KUBO-eu-central-1.json" height="400px" id="website-snapshot-web-vitals-barchart-kubo-eu-central-1" >}}
[What do `Fatal`, `Undefined`, `Poor` etc. mean?](#values)
During the designated time period (indicated in the bottom right corner), we conducted multiple measurements for the five metrics shown along the x-axis. The y-axis represents the proportion of measurement outcomes from the total number of measurements specifically taken in the eu-central-1 region. This visual representation allows us to analyze the distribution of the metric ratings within the specified time frame for that particular region.
-### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors-web3storage}
+### Website Probing Success rate from different Regions {#website-snapshot-retrieval-errors}
-{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-web3.storage.json" height="350px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-retrieval-errors-web3.storage.json" height="350px" id="website-snapshot-retrieval-errors" >}}
-While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-web3storage-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis).
+While the graph on [IPFS Retrieval Errors](#website-trend-retrieval-errors-kubo) further up shows the Kubo retrieval errors over time, this graph shows the retrieval errors as seen from different regions in the specified time interval (bottom right corner). Alongside the Kubo retrieval outcomes it also shows the HTTP results. The black markers again show the number of probes performed in each region with each request method (count shown on the right handside y-axis).
### Kubo Metrics by Region
@@ -104,21 +104,21 @@ This series of graphs presents a comprehensive analysis of latency performance a
To provide context and aid interpretation, the graphs incorporate shaded background areas. These areas are color-coded, with green representing good performance, yellow indicating areas that require improvement, and red denoting poor performance. The thresholds are defined by [web-vitals](https://web.dev/vitals) (more info below in [Metrics](#metrics)). By analyzing the position of the CDF lines in relation to the shaded regions, one can quickly identify regions with superior, average, or subpar latency performance for TTFB, FCP, and LCP.
-#### Time To First Byte {#website-snapshot-metric-cdf-web3storage-kubo-ttfb}
+#### Time To First Byte {#website-snapshot-metric-cdf-kubo-ttfb}
-{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-web3.storage-KUBO-ttfb.json" height="320px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-web3.storage-KUBO-ttfb.json" height="320px" id="website-snapshot-metric-cdf-kubo-ttfb" >}}
-#### First Contentful Paint {#website-snapshot-metric-cdf-web3storage-kubo-fcp}
+#### First Contentful Paint {#website-snapshot-metric-cdf-kubo-fcp}
-{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-web3.storage-KUBO-fcp.json" height="320px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-web3.storage-KUBO-fcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-fcp" >}}
-#### Largest Contentful Paint {#website-snapshot-metric-cdf-web3storage-kubo-lcp}
+#### Largest Contentful Paint {#website-snapshot-metric-cdf-kubo-lcp}
-{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-web3.storage-KUBO-lcp.json" height="320px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-metric-cdf-web3.storage-KUBO-lcp.json" height="320px" id="website-snapshot-metric-cdf-kubo-lcp" >}}
-### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-web3storage-ttfb}
+### Kubo vs HTTP Latency Comparison (TTFB) {#website-snapshot-http-ratio-ttfb}
-{{< plotly json="../../plots/latest/website-snapshot-http-ratio-web3.storage-ttfb.json" height="500px" >}}
+{{< plotly json="../../plots/latest/website-snapshot-http-ratio-web3.storage-ttfb.json" height="500px" id="website-snapshot-http-ratio-ttfb" >}}
We calculated different percentiles for the Time To First Byte (TTFB) metric in different regions for website requests that were done via Kubo and via plain HTTP. Then we divided the values of Kubo by the ones from HTTP. A resulting number greater than `1` means that Kubo was slower than HTTP in that region for that percentile. Conversely, a number less than `1` means that Kubo was faster.
@@ -165,4 +165,4 @@ TTI measures the time it takes for a web page to become fully interactive and re
| **Needs Improvement** | The value was larger than the `good` but smaller than the `poor` threshold (see [Metrics](#metrics)) |
| **Poor** | The value was larger than the `poor` threshold (see [Metrics](#metrics)) |
| **Undefined** | We could not gather the metric because of internal measurement errors (our fault) |
-| **Fatal** | We could not gather the metric because we could not retrieve the website at all |
\ No newline at end of file
+| **Fatal** | We could not gather the metric because we could not retrieve the website at all |
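Both patched pages describe the Kubo-vs-HTTP comparison the same way: compute TTFB percentiles per region for each request method, then divide the Kubo value by the HTTP value. A minimal sketch of that ratio computation follows; the sample latencies and the nearest-rank percentile helper are hypothetical illustrations, not code from the measurement pipeline:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    s = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(s)))
    return s[rank - 1]

# Hypothetical TTFB samples in milliseconds for one region and one week of probes.
kubo_ttfb = [180, 220, 350, 400, 900, 1200, 2500]
http_ttfb = [90, 110, 130, 150, 200, 240, 300]

for p in (50, 90, 99):
    # ratio > 1: Kubo was slower than plain HTTP at this percentile; ratio < 1: faster.
    ratio = percentile(kubo_ttfb, p) / percentile(http_ttfb, p)
    print(f"p{p}: Kubo/HTTP TTFB ratio = {ratio:.2f}")
```

With these sample numbers every ratio exceeds `1`, i.e. Kubo would read as slower than plain HTTP at each percentile; the published plots show one such ratio per region and percentile.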