OPEN: FP integration #3
Conversation
Hi Alberto! Since this PR hasn't been merged in the GitLab repo, it's not really a reintegration. Could you provide a more detailed description of the proposed feature? Additionally, I think it's best not to add the PULP-TrainLib kernel integration into this PR.
Hi Victor,

Sure, we can move the pulp-trainlib integration into a separate PR. I'll be more thorough about the contents and added features.

Feature addition: Float Immediate abstract type

To test the validity of this method, I've integrated the testImmediatePromotionFloat test into the testTypes.py script, in which we test different immediate values to be represented in the float32 and bfloat16 formats, which have been added as FloatImmediate extensions in Deeploy/CommonExtensions/DataTypes. If we want to add a new floating-point precision format, we can simply define it there by specifying its total width, the number of bits reserved for the fraction, and the number of bits reserved for the exponent (see the sketch below).

After that, I've tested the correct integration of floats with a simple deployment of a dummy model performing an addition between two float tensors. The test model and data are defined in Test/FloatAdder. I've had to edit a number of files to enable float data types.

I've also made some minor edits to the documentation, as some commands were incorrect or incomplete. If you have more in-depth questions about the changes, let me know.
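A minimal sketch of what such a format definition could look like, based on the fields visible in the diff below (typeName, typeFraction, typeExponent); the import path and the typeWidth field name are assumptions, not taken from this PR:

    from Deeploy.AbstractDataTypes import FloatImmediate  # import path assumed

    class float16(FloatImmediate):
        typeName = "float16"  # name emitted into generated code
        typeWidth = 16        # total storage width in bits (assumed field name)
        typeFraction = 10     # bits reserved for the fraction (mantissa)
        typeExponent = 5      # bits reserved for the exponent

With 1 sign bit, 5 exponent bits, and 10 fraction bits, this matches the IEEE 754 binary16 layout.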
Hi Alberto! I had a look at your proposed changes, and I think they are overall in okay shape. My main advice would be to rely on standard library functions for mantissa and exponent extraction (they have Python bindings), as the current implementation is hard to follow and seems prone to edge cases.
class FloatImmediate(Immediate[Union[float, Iterable[float]], _ImmediateType]):
    typeFraction: int  #: int: Represents the number of bits reserved for the fraction part
    typeExponent: int  #: int: Represents the number of bits reserved for the exponent part
    signed: bool  #: bool: Represents whether the underlying float is signed or unsigned (should be removed)
If this is not needed, please remove it :)
Sure, float numbers are all signed after all. I just put the bool here to make sure it wouldn't conflict with anything else in the framework.
# The offset added to the exponent
return 2**(cls.typeExponent - 1) - 1
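For example, with the 8 exponent bits shared by float32 and bfloat16, this evaluates to the standard IEEE 754 exponent bias:

    >>> 2**(8 - 1) - 1
    127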
# ADEQUINO: This is an ugly workaround for FP; it works for bfloat16 and fp32 because bfloat16 is a truncated fp32
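For context on the "truncated fp32" claim: bfloat16 keeps the upper 16 bits of the float32 bit pattern (same sign and exponent bits, shortened mantissa). A standalone sketch of that truncation, not taken from this PR:

    import struct

    def bf16_truncate(x: float) -> float:
        # Reinterpret the float32 pattern as an integer, zero the low
        # 16 mantissa bits, and reinterpret back (round-toward-zero).
        bits = struct.unpack(">I", struct.pack(">f", x))[0]
        return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]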
Not sure I understand this comment. What about the code is a workaround?
# Fraction binarization, fails if nbits required > n bits mantissa.
# If integer part of immediate is 0, we start counting mantissa bits after we find the first 1 bit.
if (int(integer) > 0):
    for i in range(cls.typeFraction):
        f = f * 2
        f, fint = math.modf(f)
        binarylist.append(str(int(fint)))
        if f == 0:
            break
        elif i == (cls.typeFraction - 1):
            return False
else:
    flag = 0
    count = cls.typeFraction + 1
    while (count):
        f = f * 2
        f, fint = math.modf(f)
        binarylist.append(str(int(fint)))
        if int(fint) == 1 and flag == 0:
            flag = 1
        if f == 0:
            break
        if flag == 1:
            count = count - 1
        if (count == 0):
            return False
All of this float-to-string-to-list-to-int casting seems unnecessary to me. Please use a builtin method to determine the number of mantissa and exponent bits, e.g. frexp (see here: https://www.tutorialspoint.com/c_standard_library/c_function_frexp.htm).
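A minimal sketch of the suggested approach using Python's math.frexp; the helper name and signature are hypothetical, not Deeploy API:

    import math

    def fits_in_format(value: float, fraction_bits: int, exponent_bits: int) -> bool:
        # Hypothetical helper: check whether `value` is exactly representable
        # as a normal number with the given mantissa/exponent widths.
        if value == 0.0:
            return True
        if math.isinf(value) or math.isnan(value):
            return False
        # frexp gives value == m * 2**e with 0.5 <= |m| < 1, so the IEEE
        # exponent is e - 1 and the significand is 2*m in [1, 2).
        m, e = math.frexp(value)
        bias = 2**(exponent_bits - 1) - 1
        if not (1 - bias <= e - 1 <= bias):
            return False
        # Scaling m by 2**(fraction_bits + 1) is exact; the result is an
        # integer iff the mantissa fits in the available fraction bits.
        return (m * 2**(fraction_bits + 1)).is_integer()

For example, fits_in_format(0.75, 10, 5) is True, while fits_in_format(0.1, 10, 5) is False.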
@@ -76,10 +76,27 @@ class uint64_t(IntegerImmediate):
    signed = False


class bfloat16(FloatImmediate):
    typeName = "float16alt"
I would suggest keeping the typeName equal to the name of the class to avoid confusion.
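Under that suggestion, the declaration would read (sketch):

    class bfloat16(FloatImmediate):
        typeName = "bfloat16"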
input_1_offset = 0
if hasattr(data_in_1, "_signed") and hasattr(data_in_1, "nLevels"):
    input_1_offset = (data_in_1._signed == 0) * int(data_in_1.nLevels / 2)
input_2_offset = 0
if hasattr(data_in_2, "_signed") and hasattr(data_in_2, "nLevels"):
    input_2_offset = (data_in_2._signed == 0) * int(data_in_2.nLevels / 2)
output_offset = 0
if hasattr(data_out, "_signed") and hasattr(data_out, "nLevels"):
    output_offset = -(data_out._signed == 0) * int(data_out.nLevels // 2)

operatorRepresentation['offset'] = input_1_offset + input_2_offset + output_offset

return ctxt, operatorRepresentation, []
I don't quite understand the use of nLevels and offset here; those abstractions don't seem to make sense to me for floating point arithmetic.
@@ -76,7 +76,7 @@ echo-bash:
 	@echo "export MEMPOOL_HOME=${MEMPOOL_INSTALL_DIR}"
 	@echo "export CMAKE=$$(which cmake)"
 	@echo "export PATH=${QEMU_INSTALL_DIR}/bin:${BANSHEE_INSTALL_DIR}:\$$PATH"
-	@echo "export PATH=~/.cargo/bin:$PATH"
+	@echo "export PATH=~/.cargo/bin:\$$PATH"
Why is this change necessary?
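One plausible reason, inferred from Make's quoting rules rather than stated in the thread: in a Makefile recipe, Make expands $ itself, so a bare $PATH is read as the (likely empty) Make variable $P followed by the literal text ATH; writing \$$PATH makes Make emit \$PATH, which the shell's double quotes turn into a literal $PATH in the echoed output. A minimal standalone sketch:

    # Makefile sketch (recipe lines must start with a tab)
    demo:
    	@echo "export PATH=~/bin:$PATH"      # Make expands $P -> prints 'export PATH=~/bin:ATH'
    	@echo "export PATH=~/bin:\$$PATH"    # prints 'export PATH=~/bin:$PATH'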
Closing as superseded by (and included in) #12.
Future work: