Uptime Monitoring
Akeeba Panopticon since version 1.1.0 offers very basic uptime monitoring, but you should not be using it if you have more than a small handful of sites.
When you think of a site's “uptime”, the only definition which makes sense is this: typing the site's URL results in a response within a reasonable amount of time, that response contains the output you expect from the site, and this happens regardless of where in the world you are accessing the site from.
Just by this definition we are putting three major demands on an uptime monitoring system:
- It must measure and log the response time of the site, not merely if the HTTP request is (eventually) responded to or not. If the site takes 150 seconds to respond it's far from being usable by real humans.
- It must examine the contents of the response, not just the HTTP status code. Getting an HTTP 200 OK with an empty response, or with the default “It works!” page of the Apache web server does not mean that our site is up.
- The above must take place from disparate geographic locations, and the results must be correlated to determine whether a site is up. For example, if your site can be accessed just fine from Germany, loads very slowly from the US east coast, takes well over two minutes to respond for users on the US west coast, and returns a blank page for users in Australia and Japan, then your site can hardly be called online. This would be true even if the site were German-speaking only, because you can guarantee neither the geographic path a request will take, nor that there are no Germans in Japan trying to access it.
In short, it gets very complicated, very fast.
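To make the first two requirements concrete, here is a minimal, hypothetical sketch in Python (Panopticon itself is written in PHP; this is only an illustration, not its actual code) of a probe which times the response and checks its contents. The URL, the expected text, and the thresholds are placeholders.

```python
# Hypothetical uptime probe covering only the first two requirements:
# it times the response and verifies the body contains an expected marker.
import requests

def check_site(url: str, expected_text: str, timeout: float = 15.0) -> dict:
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=True)
        elapsed = response.elapsed.total_seconds()  # approximate response time
        is_up = (
            response.status_code < 400          # HTTP error codes count as "down"
            and expected_text in response.text   # an empty or default page is not "up"
            and elapsed < 10.0                   # a very slow response is useless to real humans
        )
        return {"up": is_up, "status": response.status_code, "seconds": elapsed}
    except requests.RequestException as error:
        return {"up": False, "status": None, "seconds": None, "error": str(error)}

print(check_site("https://www.example.com", "Example Domain"))
```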
The built-in site uptime monitoring in Panopticon only fulfils the first two requirements. The third requirement, however, is by definition impossible to meet with Akeeba Panopticon, which is self-hosted on a single server and therefore in a single geographic location. Even if we created small clients to install on servers across the world, setting up such a constellation of servers would be both complicated and expensive, thereby negating the reason you are using Panopticon in the first place.
Therefore, it makes far more sense to use a third party service. We recommend HetrixTools, the same service we use for our own sites. If you prefer something self-hosted, you can use Uptime Kuma, noting that being self-hosted it also does not fulfil the third requirement. Third party services and software offer many more features than the very basic and sparse core feature in Panopticon.
Remember: the core uptime monitoring in Panopticon is intentionally very basic. It's there to just let you know when a site goes up or down, as seen from Panopticon's server. It's not there to let your clients, or their sites' visitors, know whether your site is up or not. It's not there to record availability over a period of time. It's not there to be accurate enough to determine whether the site meets a Service Level Agreement. We know how to do all of that, but that is a software package of its own, which would be far more complicated to set up than Panopticon. It also makes no sense to duplicate what other people have done; there are self-hosted site monitoring solutions, as we said. People requesting these kinds of features will be told to read this page as a reminder of why their request is out of scope for Panopticon.
The built-in monitoring has exactly one goal: to notify you when the site goes down, or comes back up, as seen from Panopticon's server. If you want any additional feature you need to use a dedicated third party site monitoring software or service.
The uptime monitoring in Panopticon runs every minute and tries to access 50 sites at a time, in parallel. The amount of time it takes to complete this run is determined by the slowest site in the batch. If most sites respond within 1-2 seconds but one site takes 13 seconds to respond, then it will take just over 13 seconds to check this batch. This can make things run slowly, since the slowest site effectively acts as a roadblock.
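To illustrate why the slowest site dictates the batch time, here is a hypothetical Python sketch (again, not the actual PHP implementation), assuming the aiohttp package is available; the placeholder URLs are probed in parallel, and the batch only finishes when the slowest request does.

```python
# Illustrative sketch of parallel batch checking: the total batch duration is
# roughly the duration of the single slowest site, because gather() only
# returns once every request has completed.
import asyncio
import time

import aiohttp

async def probe(session: aiohttp.ClientSession, url: str) -> float:
    start = time.monotonic()
    async with session.get(url) as response:
        await response.read()
    return time.monotonic() - start

async def check_batch(urls: list[str]) -> None:
    start = time.monotonic()
    async with aiohttp.ClientSession() as session:
        durations = await asyncio.gather(*(probe(session, u) for u in urls))
    print(f"slowest site: {max(durations):.1f}s, whole batch: {time.monotonic() - start:.1f}s")

asyncio.run(check_batch(["https://www.example.com", "https://www.example.org"]))
```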
The uptime monitoring task has a higher priority than anything else. The more sites you have, the more CRON jobs you will need to set up, so that there is enough availability for uptime monitoring as well as the site monitoring, maintenance, site scanning, and site backup tasks. That's why uptime monitoring is turned off by default.
Accessing a lot of sites hosted on the same server at once may be misidentified as an attack by the host, blocking Panopticon's access to your server. If this happens, your sites across an entire server or hosting company will appear to be down but you'll still be able to access them from a web browser. In this case, please ask your host to whitelist your Panopticon server's IP address.
Because of the blocking nature of long-running requests (see the discussion of batches above), Panopticon has a hard timeout of 5 seconds waiting for a connection to be established, and 15 seconds in total for the server to respond. If you have a particularly slow site you may blow past this timeout, and the site will appear down. This is NOT a bug. In fact, a site which takes over 15 seconds to respond is practically down, as very few people, if any, will wait that long for the site to load.
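The same two-level timeout can be pictured with a short, hypothetical Python sketch (not Panopticon's actual code), where aiohttp's ClientTimeout separates the connection phase from the total request time; the URL is a placeholder.

```python
# Hypothetical illustration of the two-level timeout: give up after 5 seconds
# if no connection can be established, and after 15 seconds in total.
import asyncio

import aiohttp

TIMEOUT = aiohttp.ClientTimeout(total=15, connect=5)

async def is_up(url: str) -> bool:
    try:
        async with aiohttp.ClientSession(timeout=TIMEOUT) as session:
            async with session.get(url) as response:
                await response.read()
                return response.status < 400
    except (aiohttp.ClientError, asyncio.TimeoutError):
        # Connection failures and blown timeouts both count as "down".
        return False

print(asyncio.run(is_up("https://www.example.com")))
```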
Redirections are always followed. There is no option to treat a redirection as a failure. This is intentional, as it's the default and only behaviour of web browsers. Remember, we are trying to determine if a person using a browser would perceive your site as functioning.
You only get one URL to test for each site being monitored. By default, it's the bare URL of the site, as determined by the API endpoint URL you have provided. You can enter a custom path, if you want, in the site's configuration. It is possible that you have a site which uses two or more applications, e.g. a Joomla! site with a WordPress blog in a subdirectory, or maybe an e-commerce system with a Joomla! or WordPress site in a subdirectory. Unfortunately, you won't be able to use additional URLs to monitor whether the other applications are working properly. This is only possible using a dedicated third party site monitoring software or service.
Site up and down events are recorded in the Site Actions Log, but there is no historical data of site availability provided anywhere. Such a feature will NOT be implemented.
The next question is, naturally, why not integrate with a third party service. They all offer an API.
There are three reasons.
First, integration requires an API key. We cannot include an API key with our mass-distributed software: it would violate the Terms of Service of third party services, it would be a stupid business move (we'd have to pay for everyone's uptime monitors), and it would be a security concern (everyone could see everyone else's uptime monitors). This means that you'd have to create and set up your own API keys for your own Panopticon installation, which, again, is a bit involved but still manageable.
The second, and more important, reason is that every third party uptime monitoring service we've seen has very restrictive API usage limits. We could only request an update on a site's status somewhere between once an hour and once a day. This would be useless for any practical purpose, such as notifying you by email when your site goes down. On the other hand, all of these services do allow you to set up email alerts, and some of them even allow you to set up push notifications or SMS (text) alerts as well. Therefore, integrating with their API would yield much less functionality than these services already offer.
The third and final reason can be summed up as “nobody will be content no matter which service(s) we integrate with”. Should we choose HetrixTools? Uptime Robot? Something else? Which one? All of them? When does this madness stop? We would end up having to implement and maintain integrations with a plethora of services, even though we know there is no real benefit in doing so, and it would lead to frustrated users submitting support requests for things that cannot be fixed except by removing these integrations altogether.
Therefore, we chose not to implement an integration with any of these services.
Yes, but!
We are introducing a (primitive) plugin system in version 1.1.0 which allows you or any third party to extend Panopticon. This works by placing your custom plugins' code into the User Code folder. You can hire a developer to implement a custom integration with the uptime monitoring service of your choosing. This is possible and easy, as long as you abide by the software license.
Documentation Copyright ©2023–2024 Akeeba Ltd.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".
You can also obtain a copy of the GNU Free Documentation License from the Free Software Foundation.