[Enhancement](Short Circuit) short circuit query supports IN #39468

Open

wants to merge 64 commits into base: master

64 commits
ec59d28
change default value of row_column_page_size to 16KB
lxr599 Jul 2, 2024
76a78da
allow to set row_column_page_size for tables
lxr599 Jul 4, 2024
288bf5a
[feature](Row store)allow to set row_column_page_size for tables
lxr599 Jul 4, 2024
b544145
format codes
lxr599 Jul 4, 2024
2339bd1
change default value of row_column_page_size to 16KB
lxr599 Jul 2, 2024
9a211b1
allow to set row_column_page_size for tables
lxr599 Jul 4, 2024
9552bd5
[feature](Row store)allow to set row_column_page_size for tables
lxr599 Jul 4, 2024
8c4c0f7
format codes
lxr599 Jul 4, 2024
a9e584a
rebase master
lxr599 Jul 5, 2024
eedaa21
Merge branch 'master' into page_size
lxr599 Jul 5, 2024
3f271b0
format codes
lxr599 Jul 5, 2024
843d26d
format fe codes
lxr599 Jul 5, 2024
2936a27
Merge remote-tracking branch 'upstream/master' into page_size
lxr599 Jul 8, 2024
82a3e84
Merge branch 'master' into page_size
lxr599 Jul 8, 2024
50dc477
Merge branch 'master' into page_size
lxr599 Jul 8, 2024
fe22103
display row_column_page_size only when row store enabled
lxr599 Jul 8, 2024
9706f9a
Merge branch 'master' into page_size
lxr599 Jul 10, 2024
9860a7f
add test case for property row_column_page_size
lxr599 Jul 10, 2024
29f3fd7
Merge branch 'master' into page_size
lxr599 Jul 10, 2024
78ba814
Merge branch 'apache:master' into page_size
lxr599 Jul 10, 2024
05374ba
add more test cases
lxr599 Jul 10, 2024
a607007
Merge branch 'master' into page_size
lxr599 Jul 11, 2024
c32b788
change the property name
lxr599 Jul 11, 2024
484160a
change names of vars related to row_store_page_size
lxr599 Jul 12, 2024
5c543f7
Merge remote-tracking branch 'upstream/master' into page_size
lxr599 Jul 15, 2024
617dafd
Merge branch 'master' into page_size
lxr599 Jul 16, 2024
e0ab2fd
Merge branch 'apache:master' into page_size
lxr599 Jul 16, 2024
779e08b
Merge branch 'master' of github.com:lxr599/doris into short_IN
lxr599 Jul 23, 2024
a18e510
IN for short circuit query
lxr599 Aug 15, 2024
1d92fdf
Merge branch 'master' of github.com:lxr599/doris into short_IN
lxr599 Aug 16, 2024
4d829dd
delete some unused data structure
lxr599 Aug 16, 2024
0dce55c
Format it!
lxr599 Aug 16, 2024
a1c8442
Format it!
lxr599 Aug 16, 2024
d55e2b2
Merge branch 'master' of github.com:lxr599/doris into short_IN
lxr599 Aug 19, 2024
51947b4
`IN` supports prepared statement
lxr599 Aug 19, 2024
a810ab8
Completed all changes
lxr599 Aug 19, 2024
b33084f
Free some data structures early
lxr599 Aug 20, 2024
c625df0
add more cluster test cases
lxr599 Aug 28, 2024
2617a23
Merge branch 'master' of github.com:apache/doris into short_IN
lxr599 Aug 28, 2024
ec35c51
the inserted data will follow the leftmost matching principle
lxr599 Sep 6, 2024
b392345
Merge branch 'master' of github.com:apache/doris into short_IN
lxr599 Sep 6, 2024
b52b876
Merge remote-tracking branch 'origin/master' into short_IN
lxr599 Sep 6, 2024
2ae2e93
fix: cannot get handler in PreparedStatement
lxr599 Sep 6, 2024
290dbbc
Align .groovy with the master branch
lxr599 Sep 6, 2024
482acb8
Add leftmost partition test case
lxr599 Sep 8, 2024
08a0bb2
remove unused import
lxr599 Sep 8, 2024
f0a333f
Merge branch 'master' of github.com:lxr599/doris into short_IN
lxr599 Sep 9, 2024
03fa703
fix: create LiteralExpr for NULL correctly
lxr599 Sep 9, 2024
28b9848
remove `order by`
lxr599 Sep 9, 2024
c4a10c4
merge some test cases
lxr599 Sep 9, 2024
2b0d2d3
Point query cloud mode supports multiple partitions
lxr599 Sep 10, 2024
5d298e9
Merge branch 'master' of github.com:lxr599/doris into short_IN
lxr599 Sep 10, 2024
4e2ba16
Fix NullPointer exception of hashmap
lxr599 Sep 10, 2024
7b965a9
fix: tableID is not tabletID
lxr599 Sep 10, 2024
a93d9e4
Round-robin candidate backends and use tablet id and replica id to id…
lxr599 Sep 12, 2024
fc562fb
Merge branch 'master' of github.com:lxr599/doris into short_IN
lxr599 Sep 12, 2024
9c61742
Format codes
lxr599 Sep 12, 2024
15d009c
not use replicaMetaTable in TabletInvertedIndex
eldenmoon Sep 13, 2024
e93d59a
change `required` to `optional` in protobuf
lxr599 Sep 13, 2024
683298a
add dynamic partition test case for point query
lxr599 Sep 13, 2024
13463e5
Merge pull request #1 from lxr599/in_test_case
lxr599 Sep 13, 2024
769b6fc
Merge remote-tracking branch 'origin/master' into short_IN
Liuyushiii Sep 29, 2024
bd7479f
Merge remote-tracking branch 'origin/master' into short_IN
lxr599 Sep 29, 2024
461a11a
Merge remote-tracking branch 'origin/short_IN' into short_IN
lxr599 Sep 29, 2024
1 change: 0 additions & 1 deletion be/src/olap/base_tablet.cpp
@@ -486,7 +486,6 @@ Status BaseTablet::lookup_row_key(const Slice& encoded_key, TabletSchema* latest
}
auto& segments = segment_caches[i]->get_segments();
DCHECK_EQ(segments.size(), num_segments);

for (auto id : picked_segments) {
Status s = segments[id]->lookup_row_key(encoded_key, schema, with_seq_col, with_rowid,
&loc);
29 changes: 29 additions & 0 deletions be/src/service/internal_service.cpp
@@ -912,6 +912,35 @@ void PInternalService::tablet_fetch_data(google::protobuf::RpcController* contro
}
}

void PInternalService::tablet_batch_fetch_data(google::protobuf::RpcController* controller,
const PTabletBatchKeyLookupRequest* batchRequest,
PTabletBatchKeyLookupResponse* batchResponse,
google::protobuf::Closure* done) {
int request_count = batchRequest->sub_key_lookup_req_size();
batchResponse->mutable_sub_key_lookup_res()->Reserve(request_count);
[[maybe_unused]] auto* cntl = static_cast<brpc::Controller*>(controller);
bool ret =
_light_work_pool.try_offer([this, batchRequest, batchResponse, done, request_count]() {
Status st = Status::OK();
brpc::ClosureGuard guard(done);
for (int i = 0; i < request_count; ++i) {
batchResponse->add_sub_key_lookup_res();
const PTabletKeyLookupRequest* request = &batchRequest->sub_key_lookup_req(i);
PTabletKeyLookupResponse* response =
batchResponse->mutable_sub_key_lookup_res(i);
Status status = _tablet_fetch_data(request, response);
status.to_protobuf(response->mutable_status());
if (!status.ok()) {
st = status;
}
}
st.to_protobuf(batchResponse->mutable_status());
});
if (!ret) {
offer_failed(batchResponse, done, _light_work_pool);
}
}

void PInternalService::test_jdbc_connection(google::protobuf::RpcController* controller,
const PJdbcTestConnectionRequest* request,
PJdbcTestConnectionResult* result,
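The new batch RPC packs several point lookups destined for one backend into a single round trip: each sub-request is answered independently, and the batch-level status ends up carrying the last failing sub-status. A minimal sketch of that aggregation pattern in plain Java (stand-in types for illustration only; the real implementation is the C++ handler above):

import java.util.List;

// Sketch of tablet_batch_fetch_data's status handling: every sub-request gets
// its own status, and the batch status reflects the last failure.
final class BatchLookupSketch {
    record Status(boolean ok, String msg) {
        static final Status OK = new Status(true, "");
    }

    interface SubLookup {
        Status execute(); // stands in for _tablet_fetch_data(request, response)
    }

    static Status runBatch(List<SubLookup> subRequests, List<Status> subStatuses) {
        Status batchStatus = Status.OK;
        for (SubLookup sub : subRequests) {
            Status st = sub.execute();
            subStatuses.add(st);  // per-sub status, like sub_key_lookup_res(i).status
            if (!st.ok()) {
                batchStatus = st; // later failures overwrite earlier ones
            }
        }
        return batchStatus;       // written to the batch response's top-level status
    }
}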
5 changes: 5 additions & 0 deletions be/src/service/internal_service.h
@@ -220,6 +220,11 @@ class PInternalService : public PBackendService {
PTabletKeyLookupResponse* response,
google::protobuf::Closure* done) override;

void tablet_batch_fetch_data(google::protobuf::RpcController* controller,
const PTabletBatchKeyLookupRequest* batchRequest,
PTabletBatchKeyLookupResponse* batchResponse,
google::protobuf::Closure* done) override;

void test_jdbc_connection(google::protobuf::RpcController* controller,
const PJdbcTestConnectionRequest* request,
PJdbcTestConnectionResult* result,
5 changes: 5 additions & 0 deletions be/src/service/point_query_executor.cpp
@@ -260,6 +260,11 @@ Status PointQueryExecutor::init(const PTabletKeyLookupRequest* request,
auto cache_handle = LookupConnectionCache::instance()->get(uuid);
_binary_row_format = request->is_binary_row();
_tablet = DORIS_TRY(ExecEnv::get_tablet(request->tablet_id()));

if (_tablet->tablet_meta()->replica_id() != request->replica_id()) {
return Status::OK();
}

if (cache_handle != nullptr) {
_reusable = cache_handle;
_profile_metrics.hit_lookup_cache = true;
fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionKey.java
@@ -58,6 +58,7 @@
import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.zip.CRC32;
@@ -79,6 +80,24 @@ public PartitionKey() {
types = Lists.newArrayList();
}

private PartitionKey(PartitionKey other) {
this.keys = new ArrayList<>(other.keys.size());
for (LiteralExpr expr : other.keys) {
try {
String value = expr.getStringValue();
if ("null".equalsIgnoreCase(value)) {
this.keys.add(NullLiteral.create(expr.getType()));
} else {
this.keys.add(LiteralExpr.create(value, expr.getType()));
}
} catch (Exception e) {
throw new RuntimeException("Create partition key failed: " + e.getMessage());
}
}
this.originHiveKeys = new ArrayList<>(other.originHiveKeys);
this.types = new ArrayList<>(other.types);
}

public void setDefaultListPartition(boolean isDefaultListPartitionKey) {
this.isDefaultListPartitionKey = isDefaultListPartitionKey;
}
@@ -205,6 +224,10 @@ public static PartitionKey createListPartitionKey(List<PartitionValue> values, L
return createListPartitionKeyWithTypes(values, types, false);
}

public static PartitionKey clone(PartitionKey other) {
return new PartitionKey(other);
}

public void pushColumn(LiteralExpr keyValue, PrimitiveType keyType) {
keys.add(keyValue);
types.add(keyType);
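The private copy constructor deep-copies each key literal by re-parsing its string value, special-casing NULL so it becomes a NullLiteral rather than the string "null". The clone matters because the distribution pruner reuses one mutable hashKey, pushing and popping columns as it recurses; a key stored in a HashMap must be a stable snapshot, or its hashCode drifts after insertion. A small self-contained sketch of that hazard (MutableKey is a stand-in, not a Doris class):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class MutableKey {
    final List<Integer> parts = new ArrayList<>();
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey k && k.parts.equals(parts);
    }
    @Override public int hashCode() { return parts.hashCode(); }
    MutableKey snapshot() { MutableKey c = new MutableKey(); c.parts.addAll(parts); return c; }
}

public class CloneDemo {
    public static void main(String[] args) {
        Map<MutableKey, Long> tablets = new HashMap<>();
        MutableKey hashKey = new MutableKey();
        hashKey.parts.add(1);                    // like pushColumn(...)
        tablets.put(hashKey.snapshot(), 1001L);  // store a stable copy, like PartitionKey.clone
        hashKey.parts.remove(0);                 // like popColumn()
        MutableKey probe = new MutableKey();
        probe.parts.add(1);
        System.out.println(tablets.get(probe));  // prints 1001; an un-cloned key would be lost
    }
}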
@@ -25,6 +25,7 @@
import org.apache.doris.nereids.trees.expressions.Cast;
import org.apache.doris.nereids.trees.expressions.EqualTo;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.InPredicate;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
@@ -53,7 +54,7 @@ private Expression removeCast(Expression expression) {
private boolean filterMatchShortCircuitCondition(LogicalFilter<LogicalOlapScan> filter) {
return filter.getConjuncts().stream().allMatch(
// all conjuncts match with pattern `key = ?`
expression -> (expression instanceof EqualTo)
expression -> ((expression instanceof EqualTo) || expression instanceof InPredicate)
&& (removeCast(expression.child(0)).isKeyColumnFromTable()
|| (expression.child(0) instanceof SlotReference
&& ((SlotReference) expression.child(0)).getName().equals(Column.DELETE_SIGN)))
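With InPredicate added, the rule short-circuits both `key = ?` and `key IN (...)` filters, provided every conjunct still targets a key column (or the hidden delete-sign column). A mini-model of the widened check (hypothetical types, not the Nereids API):

import java.util.List;

// Example queries and how they fare under the widened check:
//   SELECT * FROM tbl WHERE k1 = 1 AND k2 = 'a';  -- qualified before and after
//   SELECT * FROM tbl WHERE k1 IN (1, 2, 3);      -- newly qualified by this PR
//   SELECT * FROM tbl WHERE v1 IN (1, 2);         -- rejected: v1 is not a key column
enum Kind { EQUAL_TO, IN_PREDICATE, OTHER }

record Conjunct(Kind kind, boolean onKeyOrDeleteSignColumn) {}

final class ShortCircuitCheck {
    static boolean matches(List<Conjunct> conjuncts) {
        return conjuncts.stream().allMatch(c ->
                (c.kind() == Kind.EQUAL_TO || c.kind() == Kind.IN_PREDICATE)
                        && c.onKeyOrDeleteSignColumn());
    }
}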
fe/fe-core/src/main/java/org/apache/doris/planner/HashDistributionPruner.java
@@ -26,9 +26,8 @@
import org.apache.doris.common.Config;

import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import com.google.common.collect.Sets;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import java.util.Collection;
import java.util.List;
@@ -49,8 +48,6 @@
* If depth is larger than 'max_distribution_pruner_recursion_depth', all buckets will be returned without pruning.
*/
public class HashDistributionPruner implements DistributionPruner {
private static final Logger LOG = LogManager.getLogger(HashDistributionPruner.class);

// partition list, sort by the hash code
private List<Long> bucketsList;
// partition columns
@@ -61,6 +58,18 @@ public class HashDistributionPruner implements DistributionPruner {

private boolean isBaseIndexSelected;

/*
* This map maintains a relationship between distribution keys and their corresponding tablet IDs.
* For example, if the distribution columns are (k1, k2, k3),
* and the tuple (1, 2, 3) is hashed into bucket 1001,
* then the `distributionKey2TabletIDs` would map the key (1, 2, 3) to the tablet ID 1001.
* (1, 2, 3) -> 1001
* Map structure:
* - Key: PartitionKey, representing a specific combination of distribution columns (e.g., k1, k2, k3).
* - Value: Set<Long>, containing the tablet IDs associated with the corresponding distribution key.
*/
private Map<PartitionKey, Set<Long>> distributionKey2TabletIDs = Maps.newHashMap();

public HashDistributionPruner(List<Long> bucketsList, List<Column> columns,
Map<String, PartitionColumnFilter> filters, int hashMod, boolean isBaseIndexSelected) {
this.bucketsList = bucketsList;
@@ -70,13 +79,21 @@ public HashDistributionPruner(List<Long> bucketsList,
this.isBaseIndexSelected = isBaseIndexSelected;
}

public Map<PartitionKey, Set<Long>> getDistributionKeysTabletIDs() {
return distributionKey2TabletIDs;
}

// columnId: which column to compute
// hashKey: the key on which to compute the hash value
public Collection<Long> prune(int columnId, PartitionKey hashKey, int complex) {
if (columnId == distributionColumns.size()) {
// compute Hash Key
long hashValue = hashKey.getHashValue();
return Lists.newArrayList(bucketsList.get((int) ((hashValue & 0xffffffff) % hashMod)));
List<Long> result =
Lists.newArrayList(bucketsList.get((int) ((hashValue & 0xffffffff) % hashMod)));
distributionKey2TabletIDs.computeIfAbsent(PartitionKey.clone(hashKey),
k -> Sets.newHashSet(result)).addAll(result);
return result;
}
Column keyColumn = distributionColumns.get(columnId);
String columnName = isBaseIndexSelected ? keyColumn.getName()
@@ -119,9 +136,6 @@ public Collection<Long> prune(int columnId, PartitionKey hashKey, int complex) {
Collection<Long> subList = prune(columnId + 1, hashKey, newComplex);
resultSet.addAll(subList);
hashKey.popColumn();
if (resultSet.size() >= bucketsList.size()) {
break;
}
}
return resultSet;
}
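At the recursion leaf, the pruner now snapshots the fully bound distribution key and records which bucket (tablet) it hashed to, so the scan node can later route each IN-list key straight to its tablet. A compact sketch of the accumulation pattern (String stands in for the cloned PartitionKey):

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Accumulation done at the recursion leaf of prune(): snapshot the key, then
// union in the bucket(s) it hashed to. The set de-duplicates repeat visits.
final class KeyTabletIndex {
    private final Map<String, Set<Long>> keyToTablets = new HashMap<>();

    void record(String snapshotKey, List<Long> hashedBuckets) {
        keyToTablets.computeIfAbsent(snapshotKey, k -> new HashSet<>())
                .addAll(hashedBuckets);
    }

    Map<String, Set<Long>> view() {
        return keyToTablets;
    }
}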
79 changes: 65 additions & 14 deletions fe/fe-core/src/main/java/org/apache/doris/planner/OlapScanNode.java
@@ -27,7 +27,6 @@
import org.apache.doris.analysis.FunctionCallExpr;
import org.apache.doris.analysis.InPredicate;
import org.apache.doris.analysis.IntLiteral;
import org.apache.doris.analysis.LiteralExpr;
import org.apache.doris.analysis.PartitionNames;
import org.apache.doris.analysis.SlotDescriptor;
import org.apache.doris.analysis.SlotId;
@@ -52,6 +51,7 @@
import org.apache.doris.catalog.Partition.PartitionState;
import org.apache.doris.catalog.PartitionInfo;
import org.apache.doris.catalog.PartitionItem;
import org.apache.doris.catalog.PartitionKey;
import org.apache.doris.catalog.PartitionType;
import org.apache.doris.catalog.Replica;
import org.apache.doris.catalog.ScalarType;
@@ -98,9 +98,13 @@
import com.google.common.base.MoreObjects;
import com.google.common.base.Preconditions;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.HashBasedTable;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import com.google.common.collect.Range;
import com.google.common.collect.RangeMap;
import com.google.common.collect.Sets;
import com.google.common.collect.Table;
import org.apache.commons.collections.CollectionUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -194,7 +198,19 @@ public class OlapScanNode extends ScanNode {

private HashSet<Long> scanBackendIds = new HashSet<>();

private PartitionPruner partitionPruner = null;

private Map<Long, Integer> tabletId2BucketSeq = Maps.newHashMap();

// Maps partition column names to a RangeMap that associates ColumnBound ranges with lists of partition IDs,
// similar to the implementation in PartitionPrunerV2Base.
private Map<String, RangeMap<ColumnBound, List<Long>>> partitionCol2PartitionID = Maps.newHashMap();

private Map<PartitionKey, Set<Long>> distributionKeys2TabletID = Maps.newHashMap();

/// tablet id -> (backend id -> replica)
private Table<Long, Long, Replica> scanBackendReplicaTable = HashBasedTable.create();

// a bucket seq may map to many tablets, and each tablet has a
// TScanRangeLocations.
public ArrayListMultimap<Integer, TScanRangeLocations> bucketSeq2locations = ArrayListMultimap.create();
@@ -259,6 +275,14 @@ public HashSet<Long> getScanBackendIds() {
return scanBackendIds;
}

public Map<String, RangeMap<ColumnBound, List<Long>>> getPartitionCol2PartitionID() {
return partitionCol2PartitionID;
}

public Map<PartitionKey, Set<Long>> getDistributionKeys2TabletID() {
return distributionKeys2TabletID;
}

public void setSampleTabletIds(List<Long> sampleTablets) {
if (sampleTablets != null) {
this.sampleTabletIds.addAll(sampleTablets);
@@ -294,6 +318,10 @@ public ArrayList<Long> getScanTabletIds() {
return scanTabletIds;
}

public Table<Long, Long, Replica> getScanBackendReplicaTable() {
return scanBackendReplicaTable;
}

public void setForceOpenPreAgg(boolean forceOpenPreAgg) {
this.forceOpenPreAgg = forceOpenPreAgg;
}
@@ -658,9 +686,9 @@ private void computeInaccurateCardinality() throws UserException {
cardinality = (long) statsDeriveResult.getRowCount();
}

// get the pruned partition IDs
private Collection<Long> partitionPrune(PartitionInfo partitionInfo,
PartitionNames partitionNames) throws AnalysisException {
PartitionPruner partitionPruner = null;
Map<Long, PartitionItem> keyItemMap;
if (partitionNames != null) {
keyItemMap = Maps.newHashMap();
@@ -677,13 +705,12 @@ private Collection<Long> partitionPrune(PartitionInfo partitionInfo,
if (partitionInfo.getType() == PartitionType.RANGE) {
if (isPointQuery() && partitionInfo.getPartitionColumns().size() == 1) {
// short circuit, a quick path to find partition
ColumnRange filterRange = columnNameToRange.get(partitionInfo.getPartitionColumns().get(0).getName());
LiteralExpr lowerBound = filterRange.getRangeSet().get().asRanges().stream()
.findFirst().get().lowerEndpoint().getValue();
LiteralExpr upperBound = filterRange.getRangeSet().get().asRanges().stream()
.findFirst().get().upperEndpoint().getValue();
Column col = partitionInfo.getPartitionColumns().get(0);
// todo: support range query
Set<Range<ColumnBound>> filterRanges =
columnNameToRange.get(col.getName()).getRangeSet().get().asRanges();
cachedPartitionPruner.update(keyItemMap);
return cachedPartitionPruner.prune(lowerBound, upperBound);
return cachedPartitionPruner.prune(filterRanges, col.getName(), partitionCol2PartitionID);
}
partitionPruner = new RangePartitionPrunerV2(keyItemMap,
partitionInfo.getPartitionColumns(), columnNameToRange);
@@ -701,12 +728,22 @@ private Collection<Long> distributionPrune(
switch (distributionInfo.getType()) {
case HASH: {
HashDistributionInfo info = (HashDistributionInfo) distributionInfo;
distributionPruner = new HashDistributionPruner(table.getTabletIdsInOrder(),
distributionPruner =
new HashDistributionPruner(table.getTabletIdsInOrder(),
info.getDistributionColumns(),
columnFilters,
info.getBucketNum(),
getSelectedIndexId() == olapTable.getBaseIndexId());
return distributionPruner.prune();
HashDistributionPruner hashPruner = (HashDistributionPruner) distributionPruner;
Collection<Long> resultIDs = hashPruner.prune();
Map<PartitionKey, Set<Long>> newPrunedIDs = hashPruner.getDistributionKeysTabletIDs();
for (Map.Entry<PartitionKey, Set<Long>> entry : newPrunedIDs.entrySet()) {
distributionKeys2TabletID.merge(entry.getKey(), entry.getValue(), (existingSet, newSet) -> {
existingSet.addAll(newSet);
return existingSet;
});
}
return resultIDs;
}
case RANDOM: {
return null;
@@ -946,6 +983,7 @@ private void addScanRangeLocations(Partition partition,
collectedStat = true;
}
scanBackendIds.add(backend.getId());
scanBackendReplicaTable.put(tabletId, backend.getId(), replica);
// For skipping missing version of tablet, we only select the backend with the highest last
// success version replica to save as much data as possible.
if (skipMissingVersion) {
@@ -981,10 +1019,20 @@ private void computePartitionInfo() throws AnalysisException {
// Step1: compute partition ids
PartitionNames partitionNames = ((BaseTableRef) desc.getRef()).getPartitionNames();
PartitionInfo partitionInfo = olapTable.getPartitionInfo();
if (partitionInfo.getType() == PartitionType.RANGE || partitionInfo.getType() == PartitionType.LIST) {
selectedPartitionIds = partitionPrune(partitionInfo, partitionNames);
} else {
selectedPartitionIds = olapTable.getPartitionIds();
switch (partitionInfo.getType()) {
case RANGE:
selectedPartitionIds = partitionPrune(partitionInfo, partitionNames);
if (isPointQuery() && partitionPruner instanceof RangePartitionPrunerV2) {
RangePartitionPrunerV2 rangePartitionPruner = (RangePartitionPrunerV2) partitionPruner;
this.partitionCol2PartitionID = rangePartitionPruner.getPartitionCol2PartitionID();
}
break;
case LIST:
selectedPartitionIds = partitionPrune(partitionInfo, partitionNames);
break;
default:
selectedPartitionIds = olapTable.getPartitionIds();
break;
}
selectedPartitionIds = olapTable.selectNonEmptyPartitionIds(selectedPartitionIds);
selectedPartitionNum = selectedPartitionIds.size();
@@ -1287,13 +1335,16 @@ public List<TScanRangeLocations> lazyEvaluateRangeLocations() throws UserExcepti
// Lazy evaluation
selectedIndexId = olapTable.getBaseIndexId();
// Only key columns
distributionKeys2TabletID.clear();
partitionCol2PartitionID.clear();
computeColumnsFilter(olapTable.getBaseSchemaKeyColumns(), olapTable.getPartitionInfo());
computePartitionInfo();
scanBackendIds.clear();
scanTabletIds.clear();
bucketSeq2locations.clear();
scanReplicaIds.clear();
sampleTabletIds.clear();
scanBackendReplicaTable.clear();
try {
createScanRangeLocations();
} catch (AnalysisException e) {
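distributionPrune() folds each pruner's key-to-tablets map into the scan node's accumulated distributionKeys2TabletID via Map.merge with a set union, and addScanRangeLocations() now records every (tablet id, backend id) -> replica triple in a Guava Table. A minimal sketch of the merge step (generic key type in place of PartitionKey; this sketch copies the incoming set defensively, whereas the PR merges the pruner's set directly):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Set-union merge as used in distributionPrune(): existing entries absorb new
// tablet IDs, new keys start from a copy of the incoming set.
final class MergeSketch {
    static <K> void mergeAll(Map<K, Set<Long>> into, Map<K, Set<Long>> delta) {
        for (Map.Entry<K, Set<Long>> e : delta.entrySet()) {
            into.merge(e.getKey(), new HashSet<>(e.getValue()), (existing, incoming) -> {
                existing.addAll(incoming);
                return existing;
            });
        }
    }

    public static void main(String[] args) {
        Map<String, Set<Long>> acc = new HashMap<>();
        mergeAll(acc, Map.of("k=(1)", Set.of(1001L)));
        mergeAll(acc, Map.of("k=(1)", Set.of(1002L), "k=(2)", Set.of(1003L)));
        System.out.println(acc); // e.g. {k=(1)=[1001, 1002], k=(2)=[1003]}
    }
}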