[CORE] Fix incorrect precision of decimal literal #6954
Conversation
Force-pushed 8ba8a73 to e32c494
@rui-mo Can you review this PR?
Hi @jiangjiangtian, as I recall, this adjustment is for the situation where one performs an arithmetic operation between a decimal and a number. In that instance, the number is converted to decimal, and the precision and scale acquired from Spark are (38, 18), which are inconsistent with the real values, e.g., in the case you mentioned. Perhaps you could help confirm whether that is the case for the example you provided. Thanks.
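As a point of reference, a minimal Scala sketch of the fallback type involved (this assumes Spark's DecimalType API; the constant is public in recent Spark versions):

import org.apache.spark.sql.types.DecimalType

// Spark's catch-all decimal type, used when the real precision/scale of a
// value is not known up front. A number folded into a decimal this way
// reports (38, 18) even if the actual value needs far fewer digits.
val fallback: DecimalType = DecimalType.SYSTEM_DEFAULT
println(fallback) // DecimalType(38,18)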
Thanks for the fix. Would you like to add the buggy case as a unit test?
@rui-mo Thanks! It seems that Spark doesn't have this logic, and I don't know why Spark doesn't need it. In my case, the type of the literal is double.
I added a unit test. Is there anything else I need to add or edit? Thanks.
Thanks for reminding me of this. I just remembered that this adjustment is typically for an arithmetic operation between a decimal and an integer/bigint. In your case, the literal is a double, so returning the original precision and scale should be fine.
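To make the integer/bigint case concrete, a small hedged sketch of the decimal shapes Spark assigns to integral types in such arithmetic (the mapping matches Spark's defaults; treat it as an illustration rather than the exact code path touched by this PR):

import org.apache.spark.sql.types.DecimalType

// An int/bigint meeting a decimal is widened to an exact decimal shape,
// which is why its precision/scale needs adjusting:
val intAsDecimal  = DecimalType(10, 0) // IntegerType -> decimal(10, 0)
val longAsDecimal = DecimalType(20, 0) // LongType    -> decimal(20, 0)
// A double literal already carries its own precision and scale, so the fix
// can return those original values unchanged.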
withTable("test") { | ||
sql("create table test (col0 decimal(10, 0), col1 decimal(10, 0)) using parquet") | ||
sql("insert into test values (0, 0)") | ||
runQueryAndCompare("select col0 / (col1 + 1E-8) from test") { _ => } |
There is a test failure as below:
Fix wrong rescale *** FAILED ***
org.apache.spark.sql.AnalysisException: unknown requires that the data to be inserted have the same number of columns as the target table: target table has 3 column(s) but the inserted data has 2 column(s), including 0 partition column(s) having constant value(s).
Fixed.
Force-pushed 4475838 to 9769e12
Force-pushed 9769e12 to 33f2df9
For SQL like the query in the unit test above (select col0 / (col1 + 1E-8) from test), where col0 and col1 are both 0, the result may be NULL. The reason is that the result of Decimal(0.00000001).toString() is "1E-8", which makes the new precision 4. Therefore, we use toPlainString here to prevent scientific notation.
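A minimal demonstration of the two string forms, using java.math.BigDecimal (whose toString/toPlainString contract produces exactly the output described above):

// toString switches to scientific notation when the adjusted exponent is
// below -6, per the BigDecimal spec; toPlainString never does.
val d = new java.math.BigDecimal("0.00000001")
println(d.toString)      // 1E-8        -> only 4 characters to derive a precision from
println(d.toPlainString) // 0.00000001  -> full positional notation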