Relational directives #641
base: develop
Conversation
}
// currently supporting only drop column
// SQL will be returned as "DROP COLUMN col1, col2"
String sql = ((RelationalDirective) directive).getSQL();
Can we avoid invoking execute on the directives themselves and instead offload directive execution to RecipePipelineExecutor? We could introduce a new function there for executing a RelationalDirective.
That's a good point. I will look into offloading directive execution to RecipePipelineExecutor.
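As a rough illustration of that suggestion, here is a minimal sketch of how RecipePipelineExecutor could drive relational directives itself; executeRelational, a RelationalDirective.transform(Relation) method, and the Relation type are assumed names for this sketch, not the actual Wrangler API.

import java.util.List;

public class RecipePipelineExecutor {

  // Hypothetical new entry point: the executor, not the directive, owns execution.
  // Row-based execution of regular directives would remain unchanged elsewhere.
  public Relation executeRelational(List<RelationalDirective> directives, Relation input) {
    Relation current = input;
    for (RelationalDirective directive : directives) {
      // Each relational directive returns a new Relation; the executor chains them.
      current = directive.transform(current);
    }
    return current;
  }
}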
String sql = ((RelationalDirective) directive).getSQL();
List<String> cols = getColumnsOfDropSQL(sql);
for (String col : cols) {
  filteredRelation = filteredRelation.dropColumn(col);
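For context, a minimal sketch of what getColumnsOfDropSQL might look like under the stated assumption that the SQL is always of the form "DROP COLUMN col1, col2"; the actual helper is not shown in this hunk, so the class and method bodies below are illustrative only.

import java.util.ArrayList;
import java.util.List;

final class DropSqlParser {
  // Hypothetical helper: extracts column names from a "DROP COLUMN col1, col2" string.
  static List<String> getColumnsOfDropSQL(String sql) {
    String columnList = sql.substring("DROP COLUMN".length()).trim();
    List<String> columns = new ArrayList<>();
    for (String column : columnList.split(",")) {
      columns.add(column.trim());
    }
    return columns;
  }
}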
We should move this logic into the directive. A SparkSQLDataset should be returned from each Directive and passed on to the next one, and SparkSQLEngine.transform should be invoked in each directive to execute it.
Building just one SQL statement per Directive should be good enough; my understanding is that SparkSQLEngine internally chains the SQL execution into a single map operation, but please verify that.
There is already a comment on that: https://github.com/cdapio/cdap/blob/1cc26e664a22977fb39f29ea03a46ee4ab531f92/cdap-app-templates/cdap-etl/hydrator-spark-core-base/src/main/java/io/cdap/cdap/etl/spark/batch/SparkSQLEngine.java#L193
Agreed. These changes are for the POC; I will move the logic into the directives.
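As a rough sketch of what "moving the logic into the directive" could look like, assuming a hypothetical DropColumnDirective, a RelationalDirective interface that declares transform(Relation) for this sketch, and a Relation with the dropColumn method used in the hunk above:

import java.util.List;

// Hypothetical directive that owns its own relational transformation,
// so the caller no longer needs to parse "DROP COLUMN ..." SQL strings.
public class DropColumnDirective implements RelationalDirective {

  private final List<String> columns;

  public DropColumnDirective(List<String> columns) {
    this.columns = columns;
  }

  @Override
  public Relation transform(Relation relation) {
    Relation result = relation;
    for (String column : columns) {
      // dropColumn is assumed to return a new Relation, as in the snippet above.
      result = result.dropColumn(column);
    }
    return result;
  }
}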
@@ -73,6 +73,24 @@
}
]
},
{
  "label": "RelationalDirectives",
Do you want to gate these changes behind a feature flag? Otherwise we would have to move this code to a separate branch and keep syncing that branch with develop and future changes.
We will add a feature flag. Until then we won't merge to develop.
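A minimal sketch of what such a feature-flag guard could look like; the flag name and the runtime-arguments lookup are illustrative assumptions, not the actual CDAP feature-flag mechanism.

import java.util.Map;

final class RelationalDirectiveFlag {
  // Hypothetical flag name; the real one would follow CDAP's feature-flag conventions.
  private static final String FLAG = "wrangler.relational.directives.enabled";

  // Defaults to disabled so develop behaves exactly as it does today.
  static boolean isEnabled(Map<String, String> runtimeArguments) {
    return Boolean.parseBoolean(runtimeArguments.getOrDefault(FLAG, "false"));
  }
}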
Column Transform directives
UI change wrangler
Directive validation
Transformation and Row Directives