Datetime validation fix. (#315)
* Datetime validation fix.

Fixed issues with default date and time. 
Updated documentation.

* Input and validation mods

Combined the date and time fields into one input.
Converting the input date and time from local machine TZ to UTC
Changed version number
nemonster authored Oct 11, 2019
1 parent 275f34f commit 60865ff
Showing 4 changed files with 73 additions and 82 deletions.
30 changes: 14 additions & 16 deletions README.md
@@ -184,12 +184,11 @@ of the clusters being monitored in the format: `cluster name`
authentication or encryption options (--ssl, --bypassHostNameVerify, etc.) necessary to log into that cluster are required. A superuser role is recommended.
* The cluster_id of the cluster you are extracting monitoring data for is also required. If you are unsure of what this is
you can obtain the ids for all the monitored clusters by using the --list parameter along with the host and auth inputs.
* To select the range of data extracted use the --startDate and --startTime parameters to set the point from which statistics
will be displayed, along with the --interval parameter that specifies how many hours of data will be included.
* All extraction ranges should be in UTC. Make sure to adjust the start date to reflect the appropriate time zone for your system when choosing the range
of data to view.
* The date and interval parameters are not required - if a parameter is not supplied the utility will use defaults to generate one. If no date, time or interval
are specified the collection will start 6 hours back from when the utility is run and cover up to the present.
* To select the range of data extracted use the --start and --interval parameters to set the point at which to start collecting events and how
many hours of events to collect.
* The start date/time will be converted from the current machine's time zone to UTC before querying.
* The date, time and interval parameters are not required - if a parameter is not supplied the utility will use defaults to generate one. If no date, time or interval
are specified the collection will start 6 hours back from when the utility is run and cover up to the present.
* The monitoring indices types being collected are as follows: cluster_stats, node_stats, indices_stats, index_stats, shards, job_stats, ccr_stats,
and ccr_auto_follow_stats.
* Notice: not all the information contained in the standard diagnostic is going to be available in the monitoring extraction. That is because it
@@ -200,12 +199,11 @@ with the Elasticsearch Monitoring team.

The additional parameters:
* `--id` _REQUIRED_     The cluster_id of the cluster you wish to retrieve data for. Because multiple clusters may be monitored this is necessary to retrieve the correct subset of data. If you are not sure, see the --list option example below to see which clusters are available.
* `--startDate`     Date for the earliest day to be extracted. Defaults to the date the utility was run, in UTC, minus the current interval value. Must be in the format yyyy-MM-dd.
* `--startTime`     The clock time of that date when the requested statistics begin. Defaults to the time the utility was run in UTC. Must be in the format HH:mm 24 hour format.
* `--interval`     The number of hours of statistics you wish to collect, starting from the stop date/time you specified and moving backward.
Default value of 6. Minimum value of 1, maximum value of 12.
* `--interval`     The number of hours of statistics you wish to collect, starting from the start date/time. Default value of 6. Minimum value of 1, maximum value of 12.
* `--start`     Required format: 'yyyy-MM-dd HH:mm'    The combined date and time for the earliest point to be extracted. Must be enclosed in quotes due to the space. Time should be in 24 hour format.
Defaults to the current date and time, minus the default interval.
* `--list`     Lists the clusters available for data extraction on this monitoring server. It will provide the cluster_name and the cluster_id. If this is
a cloud cluster and the metadata.display_name was setn it will be displayed as well.
a cloud cluster and the metadata.display_name was set it will be displayed as well.
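As the commit message notes, the supplied `--start` value is interpreted in the local machine's time zone and converted to UTC before querying. A minimal sketch of that conversion, assuming the same `yyyy-MM-dd HH:mm` format (the class and method names here are hypothetical, not the utility's actual code):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class StartTimeDemo {
    // Parse a --start value in the local machine's zone, then shift it to UTC.
    static String toUtc(String start, ZoneId localZone) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm");
        LocalDateTime local = LocalDateTime.parse(start, fmt);
        ZonedDateTime utc = local.atZone(localZone)
                                 .withZoneSameInstant(ZoneId.of("Z"));
        return fmt.format(utc);
    }

    public static void main(String[] args) {
        // A machine at UTC+2 entering 08:30 local time queries from 06:30 UTC.
        System.out.println(toUtc("2019-08-25 08:30", ZoneId.of("+02:00")));  // 2019-08-25 06:30
    }
}
```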

#### Examples

@@ -215,15 +213,15 @@ The additional parameters:
```
##### Specifies a specific date and time, using the default interval of 6 hours:
```$xslt
sudo ./export-monitoring.sh --host 10.0.0.20 -u elastic -p --ssl --id 37G473XV7843 --startDate 2019-08-25 --startTime 08:30
sudo ./export-monitoring.sh --host 10.0.0.20 -u elastic -p --ssl --id 37G473XV7843 --start '2019-08-25 08:30'
```
##### Specifies an 8 hour interval from time the extract was run.
##### Specifies the last 8 hours of data.
```$xslt
sudo ./export-monitoring.sh --host 10.0.0.20 -u elastic -p --ssl --id 37G473XV7843 --interval 8
```
##### Specifies a specific date, time and interval:
```$xslt
sudo ./export-monitoring.sh --host 10.0.0.20 -u elastic -p --ssl --id 37G473XV7843 --startDate 2019-08-25 --startTime 08:30 --interval 10
sudo ./export-monitoring.sh --host 10.0.0.20 -u elastic -p --ssl --id 37G473XV7843 --start '2019-08-25 17:45' --interval 10
```
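When no `--start` is supplied, the utility defaults to the current UTC time minus the interval. A self-contained sketch of that default computation (the class and method names here are hypothetical):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class DefaultStartDemo {
    // Default --start: the current time in UTC, moved back by the interval.
    static String defaultStart(int intervalHours) {
        return DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm")
                .format(ZonedDateTime.now(ZoneId.of("+0")).minusHours(intervalHours));
    }

    public static void main(String[] args) {
        System.out.println(defaultStart(6)); // e.g. "2019-10-11 09:15" if run at 15:15 UTC
    }
}
```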
##### Lists the clusters available in this monitoring cluster
```$xslt
@@ -233,10 +231,10 @@
# Experimental - Monitoring Data Import

Once you have an archive of exported monitoring data, you can import this into an ES 7 instance that has monitoring enabled. Only ES 7 is supported as a target cluster.
* You will need an installed instance of the diagnostic utility installed. This does not need to be on the same
* You will need an installed instance of the diagnostic utility. This does not need to be on the same
host as the ES monitoring instance, but it does need to be on the same host as the archive you wish to import since it will need to read the archive file.
As with all other diag functions, a recent Java runtime must be installed.
* This will only work with a monitoring export archive produced by the diagnostic utility. It will not work with a standard diagnostic bundle or something the customer puts together.
* This will only work with a monitoring export archive produced by the diagnostic utility. It will not work with a standard diagnostic bundle or a custom archive.
* The only required parameters are the host/login information for the monitoring cluster and the absolute path to the archive you wish to import.
* `--input` _REQUIRED_     Absolute path to the archive you wish to import. No symlinks, please. The name format will
be similar to a standard diagnostic: `monitoring-export-<Datestamp>-<Timestamp>`.
2 changes: 1 addition & 1 deletion pom.xml
@@ -4,7 +4,7 @@

<groupId>com.elasticsearch</groupId>
<artifactId>support-diagnostics</artifactId>
<version>7.0.10</version>
<version>7.1.0</version>
<packaging>jar</packaging>
<name>Support Diagnostics Utilities</name>
<properties>
@@ -1,10 +1,7 @@
package com.elastic.support.monitoring;

import com.beust.jcommander.Parameter;
import com.elastic.support.config.Constants;
import com.elastic.support.config.ElasticClientInputs;
import com.elastic.support.util.SystemProperties;
import com.elastic.support.util.SystemUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -16,112 +13,93 @@
public class MonitoringExportInputs extends ElasticClientInputs {

private static final Logger logger = LogManager.getLogger(ElasticClientInputs.class);
private static int defaultInterval = 6;

@Parameter(names = {"--id"}, description = "Required except when the list command is used: The cluster_uuid of the monitored cluster you wish to extract data for. If you do not know this you can obtain it from that cluster using <protocol>://<host>:port/ .")
protected String clusterId;
public String getClusterId() {
return clusterId;
}

public void setClusterId(String clusterId) {
this.clusterId = clusterId;
}

@Parameter(names = {"--startDate"}, description = "Date for the ending day of the extraction. Defaults to today's date. Must be in the format yyyy-MM-dd.")
protected String startDate = "";
public String getStartDate() {
return startDate;
}
public void setStartDate(String startDate) {
this.startDate = startDate;
}

@Parameter(names = {"--startTime"}, description = "The clock time you wish to cut off collection statistics. Defaults to 6 the current time. Must be in the format HH:mm.")
public String startTime = "";
public String getStartTime() {
return startTime;
}
public void setStartTime(String startTime) {
this.startTime = startTime;
@Parameter(names = {"--interval"}, description = "Number of hours back to collect statistics. Defaults to 6 hours, but can be set as high as 12.")
int interval = defaultInterval;
public void setInterval(int interval) {
this.interval = interval;
}

@Parameter(names = {"--interval"}, description = "Number of hours back to collect statistics. Defaults to 6 hours, but may be specified up to 12.")
int interval = 6;

public int getInterval() {
return interval;
}

public void setInterval(int interval) {
this.interval = interval;
@Parameter(names = {"--start"}, description = "Date and time for the starting point of the extraction. Defaults to today's date and time, minus the 6 hour default interval in UTC. Must be in the 24 hour format yyyy-MM-dd HH:mm.")
protected String start = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm").format(ZonedDateTime.now(ZoneId.of("+0")).minusHours(defaultInterval));
public void setStart(String start) {
this.start = start;
}
public String getStart() {
return start;
}

@Parameter(names = {"--list"}, description = "List the clusters available on the monitoring cluster.")
boolean listClusters = false;
public boolean isListClusters() {
return listClusters;
}

public void setListClusters(boolean listClusters) {
this.listClusters = listClusters;
}

// Generated during the validate method for use by the query.
protected String queryStartDate;
protected String queryEndDate;

public boolean validate(){

if (! super.validate()){
public boolean validate() {

if (!super.validate()) {
return false;
}

boolean passed = true;
if(! listClusters){
if(StringUtils.isEmpty(clusterId) ){
if (!listClusters) {
if (StringUtils.isEmpty(clusterId)) {
logger.warn("A cluster id is required to extract monitoring data.");
return false;
}

if(interval < 1 || interval > 12){
if (interval < 1 || interval > 12) {
logger.warn("Interval must be between 1 and 12");
passed = false;
}

if(passed == false){
if (passed == false) {
return passed;
}
}

try {
ZonedDateTime start = null, stop = null;
// Set up the start and stop dates.
// Adjust for the offset by adding the reversed value given
// that when you reindex it everything will go in as UTC.
if(StringUtils.isEmpty(startDate)){
// Default the stop point to the current datetime, UTC. If the specified an offset so they don't have to manually
// calculate the UTC diff apply that.
queryEndDate = DateTimeFormatter.ofPattern("yyyy-MM-ddTHH:mm").format(ZonedDateTime.now() ) + ":00+00:00";
stop = ZonedDateTime.parse(queryEndDate, DateTimeFormatter.ISO_OFFSET_DATE_TIME);

// Since we're using now as a default start date, get the start by moving it back in time by the interval, which may also be a default.
start = stop.minusHours(interval);
}
else{
queryStartDate= startDate +"T" + startTime + ":00+00:00";
start = ZonedDateTime.parse(queryStartDate, DateTimeFormatter.ISO_OFFSET_DATE_TIME);
stop = start.plusHours(interval);
}

ZonedDateTime current = ZonedDateTime.now( ZoneId.of("+0") );
if ( stop.isAfter(current)) {
logger.info("Warning: The collection interval designates a stopping point after the current date and time. This may result in less data than expected.");
ZonedDateTime workingStart = null, workingStop = null;
start = start.replace(" ", "T");
workingStart = ZonedDateTime.parse(start + ":00+00:00", DateTimeFormatter.ISO_OFFSET_DATE_TIME);
workingStop = workingStart.plusHours(interval);

ZonedDateTime current = ZonedDateTime.now(ZoneId.of("+0"));
if (workingStop.isAfter(current)) {
logger.info("Warning: The input collection interval designates a stopping point after the current date and time. This may result in less data than expected.");
workingStop = current;
}

// Generate the string subs to be used in the query.
queryStartDate = start.getYear() + "-" + start.getMonthValue() + "-" + start.getDayOfMonth() + "T" + start.getHour() + ":" + start.getMinute() + ":00.000Z";
queryEndDate = stop.getYear() + "-" + stop.getMonthValue() + "-" + stop.getDayOfMonth() + "T" + stop.getHour() + ":" + stop.getMinute() + ":00.000Z";
queryStartDate = (DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm").format(workingStart) + ":00.000Z").replace(" ", "T");
queryEndDate = (DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm").format(workingStop) + ":00.000Z").replace(" ", "T");

}
catch (Exception e){
logger.warn("Invalid Date or Time format. Please enter the date in format YYYY-MM-dd and the date in HH:mm");
} catch (Exception e) {
logger.warn("Invalid Date or Time format. Please enter the date in format YYYY-MM-dd HH:mm");
passed = false;
}

@@ -16,44 +16,59 @@ public class TestExportInputValidation {
@Test
public void testExportInputValidations(){

MonitoringExportInputs mei = new MonitoringExportInputs();

// Check cluster id validation
MonitoringExportInputs mei = new MonitoringExportInputs();
boolean valid = mei.validate();
assertEquals(false, valid);

mei = new MonitoringExportInputs();
mei.setClusterId("test");
mei.setStart("08-29-2019 02:25");
valid = mei.validate();
assertEquals(true, valid);
assertEquals(false, valid);

mei.setStartDate("08-29-2019");
mei = new MonitoringExportInputs();
mei.setClusterId("test");
mei.setStart("2019-08-29 22:22:22");
valid = mei.validate();
assertEquals(false, valid);

mei.setStartDate("2019-08-29");
mei = new MonitoringExportInputs();
mei.setClusterId("test");
valid = mei.validate();
assertEquals(true, valid);

mei.setStartTime("2:25:2");
mei = new MonitoringExportInputs();
mei.setClusterId("test");
mei.setStart("2019-08-29 02:25");
valid = mei.validate();
assertEquals(false, valid);
assertEquals(true, valid);

mei.setStartTime("02:25");
mei = new MonitoringExportInputs();
mei.setClusterId("test");
valid = mei.validate();
assertEquals(true, valid);

mei = new MonitoringExportInputs();
mei.setClusterId("test");
mei.setInterval(0);
valid = mei.validate();
assertEquals(false, valid);

mei = new MonitoringExportInputs();
mei.setClusterId("test");
mei.setInterval(13);
valid = mei.validate();
assertEquals(false, valid);

mei = new MonitoringExportInputs();
mei.setClusterId("test");
mei.setInterval(1);
valid = mei.validate();
assertEquals(true, valid);

mei = new MonitoringExportInputs();
mei.setClusterId("test");
mei.setInterval(12);
valid = mei.validate();
assertEquals(true, valid);
