
Let's parse those waveforms #226

Open
aceisace opened this issue Mar 26, 2023 · 49 comments

@aceisace

One of the bottlenecks of this library is the ability to parse vendor waveforms correctly. The current approach is to convert a .wbf file to a json-like format with the mod from fried-ink, then to use the converter from this repo to convert the json to a header file.

As discussed with @martinberlin and @mcer12 and @vroland, a better approach to converting the waveforms is required. The idea is to have a single parser (preferably in Python) that is able to convert the waveforms from .wbf directly to the header format required by this lib.

After extensive hours of digging into waveforms, I have finally written a Python-based waveform parser that is able to parse the different modes and, for each mode, the waveform for each temperature range. The parsed data seems to be correct. I want to share this parser as soon as it's possible to use vendor waveforms directly with this lib.

For that, however, I need help understanding the header file format. While I do consider myself an expert in Python, I am by no means a C++ developer, and understanding complex code and C/C++-specific structures is a bit difficult for me. I need help understanding how to convert a mode- and temperature-range-specific waveform into the needed header file. So far, the help I have gotten was not sufficient, hence this issue on GitHub.

So far, my parser can extract the following data from a .wbf file:

{range(0, 3):
{'waveform_hex': ['55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 92},
range(3, 6): 
{'waveform_hex': ['55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 84}, 
range(6, 9): {
'waveform_hex': ['55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]],
'length': 76}, range(9, 12): {
'waveform_hex': ['55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'],
 'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 72}, 
range(12, 15): {
'waveform_hex': ['55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 64}, 
range(15, 18): {
'waveform_hex': ['55', '55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 98}, 
range(18, 21): {
'waveform_hex': ['55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 90}, 
range(21, 24): {
'waveform_hex': ['55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 74}, 
range(24, 27): {
'waveform_hex': ['55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 74}, 
range(27, 30): {
'waveform_hex': ['55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 66},
range(30, 33): {
'waveform_hex': ['55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 58}, 
range(33, 38): {
'waveform_hex': ['55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '55', '0', 'aa', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 58}, 
range(38, 43): {
'waveform_hex': ['55', '55', '55', '0', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '0', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 50}, 
range(43, 48): {
'waveform_hex': ['55', '55', '55', '0', 'aa', 'aa', 'aa', '0', '0', '55', '55', '55', '0', 'aa', 'aa', 'aa', '0', 'ff', '1', 'b0', 'b0', '70', '40', '0', '0'], 
'phases': [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2], [0, 0, 0, 0], [3, 3, 3, 3], [1, 0, 0, 0], [0, 0, 3, 2], [0, 0, 3, 2], [0, 0, 3, 1], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]], 'length': 50}}

The range specifies which temperature range these waveforms were made for. These waveforms are for Mode 0 (init). The hex array is basically the exact data of the waveform. The phases are the binary representation of each hex byte.
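
For anyone following along, the hex-to-phases step boils down to splitting each byte into four 2-bit values, least-significant pair first. A minimal sketch (the function name is mine, not taken from the parser):

def decode_phase_byte(byte: int) -> list[int]:
    """Split one waveform byte into four 2-bit values, least-significant pair first."""
    return [(byte >> (2 * i)) & 0b11 for i in range(4)]

# Reproduces the 'phases' entries shown above:
assert decode_phase_byte(0x55) == [1, 1, 1, 1]
assert decode_phase_byte(0xAA) == [2, 2, 2, 2]
assert decode_phase_byte(0xB0) == [0, 0, 3, 2]
assert decode_phase_byte(0x40) == [0, 0, 0, 1]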

My question is: how do I get from this format to the required header file?

@aceisace
Author

Waveform in json as returned by the inkwave mod:
sample waveform in json

@aceisace
Author

aceisace commented Mar 30, 2023

Sharing the current version of my waveform parser based on python:
Python-based waveform parser

@aceisace
Author

aceisace commented Apr 6, 2023

Referencing #132 as it seems related.
I can write an article about waveforms, but in a nutshell, a waveform file boils down to this:

  • header file containing some data about the waveform
  • temperature ranges supported by the display
  • each temperature range has a separate waveform for each mode, e.g. update without refreshing, partial update etc.

Waveforms themselves contain phases, which contain the information about which voltage pattern needs to be applied to a given pixel to convert it from a known grayscale i to a different grayscale j.

In this case, '0x55', '0x00', '0xaa' (a section of a waveform)
have the phases [1, 1, 1, 1], [0, 0, 0, 0], [2, 2, 2, 2], which are basically the binary representations of the above-mentioned hex numbers.

However, converting these phases into a format supported by epdiy is something not quite clear to me yet.

@martinberlin
Collaborator

Interesting work here about waveforms.
I was reading about this and stumbled upon the NekoInk project, which describes IWF, the interchangeable Waveform Format. Maybe that is also interesting to understand?

@aceisace
Author

aceisace commented May 9, 2023

There are quite a few articles about waveforms, none of them extensive and each containing some differences, @martinberlin . What's clear is that the waveforms used to be embedded on chips on the flex-cable in older e-ink displays, while now they are provided as .wbf files. These files are only available via signed NDAs with E-Ink (or other hacks), so sharing them is a problem in itself. What's more, the older waveforms had 3-bit LUTs, while most now have 4-bit LUTs, and the latest ones (incl. the ones supporting colour) use 5-bit waveforms. Previous versions had timings, newer ones have hex values with 0, 1, 2 (3).
What makes this parser a bit special is that it's in Python, so it's easier to maintain, easier to adapt, easier to read and doesn't contain too many lines (you should check the number of lines in inkwave..). If I can understand how to convert one waveform into the epdiy format, we'll be able to use 4-bit vendor waveforms with ease directly from waveform files (effectively bypassing the .wrf format). Any help with that would be much appreciated @vroland .

Assuming the most common use case, a standard 4-bit waveform, the parser should work without an issue. In a nutshell, the waveform boils down to the following format:

HEADER...
TEMP-RANGE-1
    ...waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...
TEMP-RANGE-2
    ....waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...
TEMP-RANGE-3
    ....waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...
TEMP-RANGE-4
    ....waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...
....

and so on until the last temperature range.

@mcer12 Have you yet had a chance to test the parser with your extensive range of waveforms? 😄

@Hanley-Yao

Thank you for sharing this code. It not only helps with the current project, but also provides great convenience for DIY e-ink projects. Your code can assist me in parsing wbf waveforms and using the results to drive an e-ink screen with an FPGA.

Thank you for continuing to improve the code despite your busy schedule. It would be great if you could create a new open-source project that implements similar functionality to this link: https://github.com/zephray/NekoInk/tree/master/waveform/gdew101_gd. This would allow us to convert wbf waveforms into CSV format for easier reading and modification. Thanks again!

@aceisace
Author

@Hanley-Yao , you're welcome. This parser was written by me and as such does not belong to epdiy or its core contributors, but it may be used for development as long as you publish any progress you have made back to me.

I am aware of zephray's NekoInk and NekoCal projects, but there is a difference in the input waveform format. To be precise, NekoInk's parser requires .fw or .iwf files containing the actual waveforms, but most recent waveforms for parallel displays are in the .wbf format, which is quite different and for which pretty much no source code is (and likely won't be) available.
As such, compatibility between different waveform formats is still difficult to achieve. Once I have a better understanding of how to convert the raw data containing the waveform into a meaningful (and tested) format, I can adapt the code to output the waveform in a format like zephray's. Any help with understanding how to use the parsed data from my parser would be much appreciated.

@Hanley-Yao

Hello, the analysis of the wbf waveform is confidential, but we can still obtain its analysis results through it8951. By directly burning the wbf waveform into it8951 and then refreshing a special image, we can use a logic analyzer to collect the corresponding waveform data. This process is somewhat troublesome. I will attempt to collect these waveforms and share both the wbf source file and the collected waveforms.

It seems from the NXP forum (https://community.nxp.com/t5/i-MX-Processors/How-to-convert-wbf-waveform-file-to-wf-file/m-p/467926/highlight/true) that the wbf file contains raw waveform data, which may be compressed or encrypted. With a specific program, it can be converted into the wf format, which increases memory usage. We cannot obtain the source code for this decompression or decryption method, but we may be able to reverse engineer it from the binary machine code in the it8951 firmware... The CPU instruction set of the it8951 may be non-public, but I will share the wbf file and the firmware of the it8951!

I am designing a PCB suitable for collecting the output signals of it8951. After the data collection and sorting, I will share it. Perhaps we can establish some mapping relationship between the wbf file and the collected data, so that we can parse these wbf files without going through it8951 (It's a pity that it seems that it8951 can only parse 4bit waveforms).

Thank you for your efforts! We can work together to tackle this challenge.
Please keep an eye on https://github.com/Hanley-Yao/WaveHack

@aceisace
Author

Thanks for your reply, @Hanley-Yao . You're right, most of the work related to waveforms is under NDA, but there is a little bit of info available on the net. Having a logic analyser is pretty handy when trying to figure out how waveforms work. I do not have these tools myself, just a simple oscilloscope and a few other tools.

But the results from my parser look promising, as the checksums within the waveforms seem valid. To make a usable format from my parser, it's just a matter of converting this format:

[0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x00, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0x00, 0xFF, 0x01, 0xB0, 0xB0, 0x70, 0x40, 0x00, 0x00]

into something meaningful, e.g. the format suggested by zephray, with 0,1,2,3 (e.g. 0 -> no op, 1-> make lighter etc.)

I am looking forward to seeing the progress of your pcb to help analyse the waveforms, as well as having some open-source way to convert the .wbf files into .wf files and then to csv.

I have a few vendor waveforms available that I use for testing purposes. So far, my parser can work with both 4-bit and 5-bit waveforms. If we know the (4-bit) input waveform in the it8951 chip, we can use the analysed data from your pcb to convert the current format into something similar to .wf or even directly to csv.

@Hanley-Yao

Hanley-Yao commented Jul 18, 2023

Thank you for your reply, @aceisace . Zephray is an expert in this field. I also have some knowledge about waveforms. Normally, if you need to execute the A2 waveform, you need to index the data through 1: current color, 2: target color, 3: frame number, and 4: temperature. We will get 2 bits of data, and generally the interface width of the ink screen is 8 or 16 bits, where every 2 bits control one pixel. That is to say, 4 or 8 pixels of driving data can be sent in every effective clock cycle. The screen will determine whether the pixel is powered on according to the following action table:

Bit 1   Bit 0   Action
0       0       No action
0       1       Draw black
1       0       Draw white
1       1       No action
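
As a rough illustration of "every 2 bits control one pixel" on an 8-bit interface, four such values could be packed into one bus byte like this (a sketch; placing pixel 0 in the lowest two bits is my assumption, not something stated here):

# Action codes from the table above:
NO_ACTION, DRAW_BLACK, DRAW_WHITE = 0b00, 0b01, 0b10

def pack_bus_byte(pixel_ops):
    """Pack four 2-bit drive values into one byte for an 8-bit data bus.

    The bit placement (pixel 0 in the lowest two bits) is an assumption;
    the real ordering depends on the controller and panel wiring.
    """
    assert len(pixel_ops) == 4
    word = 0
    for i, op in enumerate(pixel_ops):
        word |= (op & 0b11) << (2 * i)
    return word

# Drive two pixels towards black and leave two untouched:
print(hex(pack_bus_byte([DRAW_BLACK, DRAW_BLACK, NO_ACTION, NO_ACTION])))  # 0x5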

According to experience, when the temperature is set to 25 degrees Celsius, the A2 waveform is used. Whether the pixel is brushed from white to black or from black to white, or not brushed at all, it will take 10 waveform cycles. Assuming the pixel is initially white and the target is black, the "01" will be found from the waveform in the first frame to the tenth frame.

However, the A2 waveform only has two colors, while GC16 has 16 colors. The data decoded by your program is usable to some extent, so we should analyze the [0x55] obtained and convert it into binary [01,01,01,01]. But I cannot index the position of the corresponding current color and target color. I have entrusted the factory to manufacture the PCB, and I hope to establish a mapping relationship between the data collected from the IT8951 and the data decoded by your program, which may help me find the pattern and make your program more complete!

Thank you for your efforts!

@aceisace
Author

Thanks for your reply @Hanley-Yao . Thanks for the info about the waveform. You are right that the A2 waveform only uses two colours, while GC16 can effectively display 16 grayscales; hence the LUT for GC16 is essentially a 16x16xn matrix (excl. temperature), as we can go from any of the 16 supported grayscales to any other of the 16 grayscales. n is probably the number of phases that need to be applied to reach the desired target grayscale.

After checking a few 4-bit waveforms for mode GC16, I do not know how to further process the data. I'm looking forward to the pcb and its results, with the hope that the parser is then able to parse the extracted waveform data 💯
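
To make that structure concrete, the lookup table for one mode and temperature range can be pictured roughly like this (a sketch; the index order is my assumption, not something confirmed here):

# lut[phase][from_gray][to_gray] -> 2-bit operation (no-op / darker / lighter)
GRAY_LEVELS = 16
num_phases = 30   # varies per waveform, mode and temperature range; example value only

lut = [[[0 for _to in range(GRAY_LEVELS)]
        for _from in range(GRAY_LEVELS)]
       for _phase in range(num_phases)]

# e.g. the operation applied in phase 4 when going from gray 0 to gray 7:
op = lut[4][0][7]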

@Hanley-Yao

Hello, @aceisace sorry for the wait. After relentless effort, I was able to refresh two special images [https://github.com/Hanley-Yao/WaveHack/tree/main/imghack] using the default waveform file in mode 2 (GC16) on it8951. I also used a logic analyzer to sample the output signal of it8951, including 14 temperature segments [https://github.com/Hanley-Yao/WaveHack/tree/main/analysis/waveshare_ed097tc2].

I am preparing to write code that can parse the csv file exported from the logic analyzer, which can convert csv into something similar to [https://github.com/zephray/NekoInk/tree/master/waveform] for human reading and modification.

Thank you for your attention!

@martinberlin martinberlin self-assigned this Aug 1, 2023
@aceisace
Author

aceisace commented Aug 2, 2023

@Hanley-Yao Thanks for the update! Wow, being able to analyse the waveforms this way is quite convenient. I was a little too busy last week fixing some code, but I managed to get my hands on a .fw file generated from an original waveform. The only issue is that the generated csv files still require some pre-processing before they can be used at all with epdiy.
Hence, any code to parse these csv files would be much appreciated!

I'll share the csv files so you can also use them to help you with the waveforms 👍

@aceisace
Author

Created a new repo for my python-waveform parser: Python waveform parser

@martinberlin
Collaborator

Could not make the 16 grayscales work correctly on the 13.3 inch display. If someone has the right waveform for ED133UT2, I would like to try this and see if I can get the grayscales rendered correctly.

@aceisace
Author

In regards to the 13.3", this one is pretty weird compared to all other displays. None of the other waveforms work on this one as they should. Furthermore, as the vendor waveforms for this one are .wbf only and 4-bit, vroland's parser cannot parse them, as it's designed to parse 5-bit only. On top of that, the difference between waveforms of this display across different batches can be significant. The result is a distorted image, lines caused by non-suitable waveforms and missing gray levels.

Your best bet is probably to use the OC4 waveform. Perhaps you could ask NXP to convert the .wbf file into a .fw file, which can be converted to csv. But then again, yet another parser is required to convert it from csv to the epdiy format. This is what I'm working on. Since vroland has been staying under the radar for quite some time now, even understanding the epdiy waveform will take quite some time and effort.

@vroland
Owner

vroland commented Aug 29, 2023

Hey, sorry, I was indeed away for a bit in summer and also focused on the V7 firmware. I can at least help you understand the epdiy waveforms :) Do you need help understanding the JSON intermediate format or the header file?
Regarding the waveform timings: This is a crutch for epdiy V1-V6 to use fewer cycles to draw an image. Normally, each frame that is sent to the display has exactly the same timing and only the direction of the voltage applied is different. To save some cycles when going directly from white or black, my idea was to modify the timing so one frame brings the particles exactly to the next gray level. The time is the high time of the CKV time in 10s of microseconds, which controls for how long the line driver is active. The timings do not come from a waveform file, I just made them up through experimentation. Hence in the parsed waveforms they are NULL.

@aceisace
Author

Welcome back! Glad to have you back! Yes, please, I need help understanding the header file itself first. Thanks for the explanation about the timings too. To be more precise, to generate a usable header file, two steps are required:

  1. Decoding a waveform as it is found in the .wbf file into a more usable format which includes the phases for all 16 grayscales. This is something inkwave is able to do, but even after going through the code several times, I cannot figure out how to parse the data inside a waveform for a given mode and temperature range. As I am planning to support not just the epdiy format, json will work just fine as an intermediate step.

  2. Converting the parsed waveform from 1) into a suitable header file. This requires understanding the waveform header. What I could figure out is that the waveform header for a mode m and temperature range t consists of 16 columns with 4 values each and a number of rows, which is fixed for a given waveform. I believe the 16 columns represent the 16 grayscales, but two questions arise: How does epdiy figure out which of the 4 voltages to apply using the phases, as the info in the phases only consists of hex numbers, e.g. 0x45? As parallel e-paper displays heavily rely on phases to change pixel values, the information about the current as well as the desired grayscale is required. So in the header file, how is a transition made, e.g. if my pixel has grayscale 0 (fully dark), how can I reach grayscale 4 on this pixel using the header?

@martinberlin
Collaborator

Just a small short note about this @aceisace

How does epdiy figure out which of the 4 voltages

To drive the pixels, only +15 / -15 V is used. So there are just 2 voltages, not 4, if I got this correctly. Here is a nice explanation from the creative genius maker that drove this originally: http://essentialscrap.com/eink/electronics.html

This is what the +-15 V voltages are used for: they are connected to the electrodes on the screen through some thin-film-transistor FETs.
Then the +22 V and -20 V voltages are used for driving these FETs. The transistors made using TFT technology are hardly ideal, and require quite a bit of voltage on the gate to fully switch. Therefore the gate driver requires these voltages which are larger than the voltages that will be applied to the FET's source.

@aceisace
Author

Thanks for the info @martinberlin . Indeed, I did not know this, but it makes sense. While I was testing, I hooked up an oscilloscope in amp-meter mode and found only a small current draw from the +22V/-20V lines, while the +15V/-15V lines were drawing much more current. That would also explain how it's possible to make a pixel darker or lighter.
From the CSV generated by Zephray's tool, there are only 3 valid operations: no-op, lighter, darker. With two voltages instead of 4, that would explain how this can be achieved. Thanks for solving this mystery!

What's left is to basically decode the waveform from the bytes found in the waveform into the epdiy header file, using json as an intermediate.

@vroland
Owner

vroland commented Aug 31, 2023

@aceisace Maybe what got you confused is that to save space, I use 2 bits per table entry, not a whole byte.
So a 16x16 table for one phase becomes 16 x 4 bytes, because each of the 4 bytes has 4 sets of two bits, indicating a lighten, darken or no-op (see phase_to_c(phase)).
So for a given phase / frame, if we want to go from gray level 5 to gray level 7, we have to look up in the 7th set of 4 bytes the least significant 2 bits of the 2nd byte. (Imagine the binary numbers written from left to right, then the 2 least significant are the leftmost bits).

@aceisace
Author

Thank you very much for explaining @vroland ! So a phase (row) always consists of 16 "slots", where each slot has 4 bytes.
These four bytes actually represent tuples of 2 bits each, where these tuples represent what to do. This is just an assumption, but 00 could be no-op, 01 is probably to make lighter and 10 is probably to make darker, while 11 is probably ignored or no-op again.
So basically, a row from zephray's csv format, which looks like this:
0,15,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,0 represents the operations needed to go from white to black are actually encoded in a column(?) in the epdiy header.
e.g. 2,2,0,0 would become 10, 10, 00, 00 in binary (10100000), which equals 0xA0 in hex. Hence, 16 operations can be packed into 4 bytes. Do the remaining bytes get added in the next row, below the same column?
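
In code, that packing would look roughly like this (a sketch, not the actual phase_to_c implementation; it reproduces the 2,2,0,0 -> 0xA0 example above, with the first operation of each group of four landing in the most significant bits):

def pack_ops_to_bytes(ops):
    """Pack a row of 16 two-bit operations into the 4 bytes of one slot."""
    assert len(ops) == 16
    packed = []
    for group_start in range(0, 16, 4):
        byte = 0
        for op in ops[group_start:group_start + 4]:
            byte = (byte << 2) | (op & 0b11)   # first op of the group ends up in the MSBs
        packed.append(byte)
    return packed

print([hex(b) for b in pack_ops_to_bytes([2, 2, 0, 0] * 4)])
# ['0xa0', '0xa0', '0xa0', '0xa0']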

Now there are just two things left to do until I can finish my parser:

  1. I'm still not quite sure about the transitions from one grayscale to another. Could you explain it more concretely with this line:
    {{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x55},{0x15,0x55,0x55,0x54}}

  2. How does inkwave parse the waveform as it is found in the waveform file?

@vroland
Owner

vroland commented Sep 2, 2023

Exactly!

For 1.:
For each pixel, we first choose the inner chunk according to the target grayscale. E.g., if we want to go from 2 to 9 (zero-based),
we first choose the 3rd 4-byte array from the outer array. Then, we look up the origin grayscale in the inner array:
Since the target is 9, we need the second operation tuple of the third byte, in this case 01. So this is the operation to push to the display for this particular pixel. We need to do this for every pixel for every phase of the waveform. This is why it's so hard to make it fast on an ESP :D

  For 2.: I can't really give you more insights on this, as I only extended the existing parser. I don't know much about the wbf format.

@aceisace
Author

aceisace commented Sep 4, 2023

Thanks for the info, @vroland
So, a single outer array contains a single operation for going from any grayscale to any other grayscale (16*16 = 256 combinations). The next outer array contains the next operation to apply to this pixel. So if I format the outer arrays in a way that they are exactly on top of each other, I just need to go down one line, right?
e.g.
{0xC9, 0x55, 0xAA, 0xAA}
{0x93, 0x00, 0xAA, 0xAA}
Assuming the operation in question is the 1st byte's 4th operation (01), does that mean the next operation to apply is 11?

That leaves one more question though. For simplicity's sake, let's assume the old grayscale value is always 0 (black). How do I find out the position of the hex value that is responsible for the desired target grayscale?

@vroland
Owner

vroland commented Sep 4, 2023

I don't quite understand the question, what value is the "hex value" you want? You mean the 2-bit operation?

@aceisace
Author

aceisace commented Sep 4, 2023

Yes, the two-bit operation within a byte. How can I change some values to improve the waveform, if I am always starting at black? Every row should contain the info about which operation needs to be applied (darker, lighter or no-op). Where exactly are the two-bit operation codes for going from a fully black pixel to any other grayscale within the outer arrays and how can I find this out myself?

For example, let's say I'm using GC16 and all pixels are currently black. If the built-in waveform is not suitable due to a different lot, you could potentially modify the waveform to get more reasonable results. The question is finding out which of the hex values in the outer array I need to modify for this.

@vroland
Owner

vroland commented Sep 7, 2023

In that case you always look at the most significant two bits of the first inner array for every phase (if I remember correctly, the "to" is the outer array, "from" the inner array). So these are what you'd need to modify, or add additional phases.

@aceisace
Author

Mind if we have a call sometime? I think this is a little too complicated to discuss in a few comments of a GitHub issue.

Also, one more thing I realised (at least on the main branch): when the waveform does use timings, the first two inner arrays of every outer array are apparently ignored. When omitting timings, the first two inner arrays of every phase seem to make a difference. If this is how it should be, it's fine. Just making sure it's not a bug, as this is one of the reasons some (not all) 13.3" displays have rendering issues with the provided waveforms. Differences can be quite large between batches. What works for one does not necessarily work for others, sometimes even within the same batch and with the same LOT.

@vroland
Owner

vroland commented Sep 14, 2023

We can probably find the time for a call, just send me an email.

I don't think that should be happening, not sure what you mean. Maybe it's easier to explain synchronously.

@aceisace
Author

Sure, I'll get in contact via mail to schedule the call 👍

@eds123

eds123 commented Oct 15, 2023

Any progress on this?

@aceisace
Author

No satisfactory results yet, even with vendor waveforms, although they should have the best results. Just parsing the waveform alone does not seem to be enough to get a decent rendering 🙁
In regards to the parser, progress is still being made, although a little slower since I found out that original waveforms in some cases give less desirable results than the built-in ones. Furthermore, differences between different lots of the same display can be significant, sometimes even requiring you to tweak the waveform yourself.

@eds123

eds123 commented Oct 15, 2023

Ok, thanks for the update @aceisace.

I'm new to this community and just seeing where I can help... Waveforms seem like the biggest blocker right now. Let me know if you want help with coding and/or testing your parser, I'd love to contribute in some way.

It's a shame that waveforms are having such varying results. Perhaps some kind of self-calibration between a grayscale test and a config might be the best way to auto-tweak waveforms for varying display batches (further down the line).

@aceisace
Author

You're welcome @eds123 ,
thanks for being willing to contribute to such a complex and tedious topic. Indeed, if the waveforms were more usable, this project would really thrive. Pretty much everyone faces the waveform bottleneck after getting the hang of using the library.

Waveforms are actually tuned for each batch and each target application. In my case, I already tried more than 5 vendor waveforms and have tested all built-in ones. But the results are rather disappointing. Interestingly, the vendor waveforms work with the E-Ink-provided board, but somehow not with epdiy. At one point, I nearly gave up when I saw two vertical lines at a given position on the display on every image, blaming my poor pcb skills. But after months, I found out it was not the pcb, but only the waveform that was causing this issue in the first place. So waveforms can be pretty different for each batch, and minor differences between batches are to be expected. Even worse, some of the converted vendor waveforms don't even give all 16 grayscales, but only 9, with the remaining ones taking one of the other tones. Even now, there is no machine that can generate waveforms on the fly, as there are actually "bad" waveform combinations too, causing some weird behaviour. As a result, even vendor waveforms are created by people who do the tedious work of finding out which combinations of voltages (and in some cases, timings) work to get a certain grayscale from any other supported grayscale.

At this moment in time, solving the issue with the waveforms by creating a parser will not necessarily make the rendering in epdiy better. But once the mystery of the format is solved and we're able to convert one format to another, it could help improve the rendering quality significantly further down the line, once the cause for this issue has been found and fixed.

My approach was to create a waveform parser written in Python which others can contribute to, with the aim of being able to convert the .wbf files into json/csv, which is much easier to understand and modify. Furthermore, these can be converted to any other supported format, e.g. the epdiy format, with the same parser. This is because most parsers either do not work, are not maintained, or are a total mystery in and of themselves, making it very difficult to work with waveforms with ease.

It is serious deep-diving with little to no support, but if you still want to help with waveforms, you can take a look at my repo for the Python waveform parser. If you have a .wbf file, you can step through the parser with breakpoints in an IDE to first understand the structure and header of the waveform file. After this, you will end up with a lot of hex values for each temperature range and mode. The 16-grayscale mode is called GC16, giving a rather lengthy array. The thing I am currently investigating is how this format is encrypted in the first place so that it can be decrypted. As waveforms essentially contain a 16x16 matrix with a varying number of "phases" that need to be applied to go from any of the 16 grayscales to another supported grayscale, we need a way to decode the hex values so they can be converted to this 3D-array structure. This C-based parser can be used as a reference, as it seems to be able to convert at least 5-bit waveforms correctly:
https://github.com/vroland/inkwave/blob/master/main.c#L571

If you find anything interesting, please let me know so this can be implemented 👍

@aceisace
Author

Been some time @Hanley-Yao , hope you're well! Have you managed to make some progress with the waveforms so far?

@martinberlin martinberlin pinned this issue Nov 3, 2023
@mcer12
Collaborator

mcer12 commented Jan 7, 2024

@aceisace Sorry, I totally lost track of this issue. Will give the parser a try. I have quite a few waveforms pulled from the depths of the internets; it would definitely help some of my displays, especially the ED097OC4, which looks super washed out with the default waveform.

@schuhumi

schuhumi commented Jan 8, 2024

Hello people,

a couple weeks ago I received a v7 board from @martinberlin (thanks again!), and of course I eventually ran into the waveform limits. Hence I'm now in the weeds of this topic and would like to create some waveforms myself.

I'm able to interpret both epdiy-json and epdiy-c-header formats. This is what the builtin waveform looks like going from any source color ($c_s$) to target color $c_t=8$ with the GC16 mode ("Do arbitrary transitions by going to full black, then the desired value.") on ED133UT2, the dashed lines indicate the 30 individual parts:
[plot: any_to_8_GC16]

Some things around timings I do not understand, and I'm trying to get my head around what @vroland wrote:

Regarding the waveform timings: This is a crutch for epdiy V1-V6 to use fewer cycles to draw an image. Normally, each frame that is sent to the display has exactly the same timing and only the direction of the voltage applied is different. To save some cycles when going directly from white or black, my idea was to modify the timing so one frame brings the particles exactly to the next gray level. The time is the high time of the CKV time in 10s of microseconds, which controls for how long the line driver is active. The timings do not come from a waveform file, I just made them up through experimentation. Hence in the parsed waveforms they are NULL.

  1. "microseconds", is this correct? As you can see in the plot, the timings of the waveform pieces (these and these together) accumulate to 155µs, which is 0.155ms, which is 0.000155s, which is way too fast? Is this supposed to be milliseconds?
  2. How exactly does this work on v7 / the lcd driver? Does this
    // high time for CKV in 1/10us.
    size_t pixel_clock; // = 12000000
    int ckv_high_time; // = 70
    mean that every piece is 7 microseconds? (milliseconds?) I.e. is this what a v7 board would do with that waveform?
    [plot: any_to_8_GC16_v7]
  3. If 2. is correct, what happens with the direct update waveform? The timings are
    "ED133UT2": [100, 100, 100, 100]
    four times 100µs. Do these become 4x 7µs? That would be more than 14x shorter, which cannot be right either?
  4. What are the waveform limits of the epdiy v7? I.e. how many pieces can a waveform have, and how short can they be?

Greetings and thanks for the amazing project!! :)
Simon

PS: I got a little stumped when parsing the waveform files, so for anyone scratching their heads it works like this:

  • actions: 0=no-op, 1=darker, 2=brighter (i.e. see the 2 * here for getting brighter until the desired value)
  • indexing: [piece, target_color, current_color], i.e. in the GC16 case the c-header works like:
const uint8_t epd_wp_epdiy_ED133UT2_2_0_data[30][16][4] = {
    {  // first waveform piece
        {0x00,0x00,0x00,0x01},  // to 0..
        {0x00,0x00,0x00,0x01},  // to 1..
        {0x00,0x00,0x00,0x01},  // ...
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {  // to 10 ...
            0x00,  // from 0-3
            0x00,  // from 4-7
            0x00,  // from 8-11
            0x01   // from 12-15 (12: 0b00, 13: 0b00, 14: 0b00, 15: 0b01)
        },
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01},
        {0x00,0x00,0x00,0x01}   // to 15
    },{{0x00,0x00,0x00,0x05},{0x00,0x00,0x00,0x05},{0x00,0x00,0x00.............

(What you can see is that no matter the target color, when we start at 15 we need to start darkening the pixel immediately, because we have 15 waveform pieces to reach black, and from there on we can make it brighter to reach the target value. A small lookup sketch for this indexing follows after this list.)

  • epdiy's json format also creates 5-bit waveforms, which means there are two waveforms for every color transition. You can subsample to 4 bit by just taking the even entries.
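
To tie the indexing together, here is a minimal lookup sketch for the [piece][target_color][current_color] layout, assuming the bit packing shown in the annotated snippet above (the first source of each group of four sits in the most significant bits of its byte):

def lookup_op(waveform, piece, to_gray, from_gray):
    """Return the 2-bit operation (0=no-op, 1=darker, 2=brighter) for one transition.

    waveform is indexed as [piece][target][4 bytes covering the 16 source levels].
    """
    byte = waveform[piece][to_gray][from_gray // 4]
    shift = (3 - (from_gray % 4)) * 2      # source 0 of each group sits in the MSBs
    return (byte >> shift) & 0b11

# With the first piece shown above ({0x00,0x00,0x00,0x01} for every target):
piece0 = [[0x00, 0x00, 0x00, 0x01]] * 16
assert lookup_op([piece0], 0, 10, 15) == 1   # coming from white (15): start darkening
assert lookup_op([piece0], 0, 10, 12) == 0   # coming from 12: nothing yet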

@martinberlin
Collaborator

martinberlin commented Jan 8, 2024

I'm not the right person to speak about waveforms, since I'm struggling with this myself.
But I can only reply about things I know more or less to be right:

  If 2. is correct, what happens with the direct update waveform? The timings are
    "ED133UT2": [100, 100, 100, 100]
    (epdiy/scripts/epdiy_waveform_gen.py, line 42 in 4de1b51)

If I understood right what Valentin explained, the v7 timings always use the same time (read his remark about the CKV timings).
Those different timings were applied on ESP32 boards (v2 to v6) for the crutch he mentioned in his remarks on this issue. @aceisace please correct me if I'm wrong, since you've gone much deeper than me into this issue.
@schuhumi very pleased that you are enjoying my last batch of v7 boards. Thanks a lot for the remark.

@vroland
Owner

vroland commented Jan 15, 2024

Sorry for the confusion, and the late reply. I'm a bit busy currently, but clearing up these waveform problems is also on my list of things to do.
Regarding the line timing: The 6us (or whatever the exact timing for the display is) is the high time for the gate driver, meaning the full clock cycle for a line is longer, but the pixels are only driven during this high period. So for V7, there are 6us of high time plus whatever low time makes the frame timing work. But the high time is always fixed.
The old waveforms had a custom timing for this high time to drive the pixels shorter / longer, but this method is not as accurate as using the vendor waveforms with a constant high time.
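
As a rough back-of-the-envelope illustration of why a full refresh still takes milliseconds even with a ~6us high time per line (all numbers below are placeholders, not datasheet values):

# Placeholder numbers, not taken from any datasheet:
rows        = 1200   # driven gate lines of a hypothetical panel
ckv_high_us = 6      # fixed line-driver high time per line
ckv_low_us  = 2      # whatever low time makes the frame timing work

frame_time_ms = rows * (ckv_high_us + ckv_low_us) / 1000
print(f"one frame takes roughly {frame_time_ms:.1f} ms")   # ~9.6 ms with these numbers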

The problem here is that I haven't reverse engineered the vendor waveforms yet. So for many displays, we don't have good waveforms for V7, although when in doubt the ED047TC2 works reasonably.
We have the ED047TC2 waveform from LilyGo, which we can use as a reference. I also have a couple of others, like the ED097TC2, but I don't want to risk publishing them because they may be E-Ink intellectual property. However, taking a good look at how they are constructed can probably give us a way to generate them procedurally from some tunable parameters for each display.
I hope it helped a little bit, I'll give some more details when I have more time :)

@schuhumi

@martinberlin Yes to that degree I understand it, but I would like to go deeper and understand in detail at what point in time which electric field is applied to the pixel. (And yes I enjoy your v7 board very much! The only thing I'd change is make the green LED less bright :D)

@vroland Thank you very much for the clarifications! That is very helpful! Yes it finally dawned on me that the refresh takes several ms for the whole display because of the many pixel-rows, and not because a pixel needs several ms to get its particles moving.

Meanwhile I was able to create a waveform that works reasonably well for my application (an e-paper Linux laptop): it works without flickering, has fast refresh, low ghosting and 2-bit color depth. And I think lowering the bar for waveform design could alleviate the scarcity of well-working waveforms in the project.

I can share some preliminary insights of that experience:

  • There's a very helpful book explaining how electrophoretic displays work, strategies for waveforms etc: https://onlinelibrary.wiley.com/doi/book/10.1002/9781119745624 (if you're a student: I got free access to the digital version through the university's library; the institutional login on the webpage didn't work for me)

  • The mighty "DC balance" simply means that one should not accumulate too much charge on a pixel, which means that overall you should not do more "brighter" operations than "darker" ones and vice versa (speaking for the constant-timing case). I did it such that I made a table of the charge levels I want to maintain at each color, i.e. 0 for black, 1 for dark grey, 2 for light grey and 4 for white. Then in every transition I made sure I step up/down to the appropriate charge level from wherever I'm coming from, and that way no pixel can experience unbounded charge accumulation.

  • A (kinda obvious but helpful) strategy to mostly end up at known grey levels is to use the "completely black" and "completely white" rails as guides. You can also see this in my plots above of the GC16 waveform, which use the "completely black" rail.

  • You can exploit the rails to cater for dc balance. Let's say you want to go from color 0 to color 1, have decided for their dc levels to be 0 and 1 as well, but need two consecutive "brighter" (=2) commands to reach a satisfactory grey level. In this example you can use the "completely black" rail to buy some dc balance budget without the gray level copying your every move, by doing a "darker" (=1) command beforehand. In practical terms, in a 4-piece-long waveform, instead of [2, 0, 0, 0] you would do [1, 2, 2, 0] (the "darker" and "brighter" commands cancel out in terms of dc balance, so one "brighter" command is left that lifts you from dc level 0 to 1. But since in the first "darker" command the pixel cannot become much darker (we're starting at color 0), you can now make it brighter than with just one single "brighter" command)

  • The particle dynamics are weird. For example, given the same initial conditions, the sequence [2, 2, 0, 2] can yield a very different grey level than [2, 2, 2, 0]. It is tedious to trial-and-error through the possible combinations, but you can exploit this behaviour to adjust which grey level you end up at without messing with your dc balance

  • Probably due to the particle dynamics, doing [2, 2, 2, 2] or doing [2, 2] and doubling the ckv-high-time absolutely do not yield the same results. As I understand it now, there's quite the time gap between the individual instructions, which makes these two variants yield different signals at the pixel.

  • When you use the built-in method in epdiy for clearing the display but are working on a waveform that runs without too much flashing, you'll absolutely run into ghosting. You simply cannot reach those crisp black and white levels without heavy flashing. So do not try to reach those levels without flashing, but instead clear the display with your own waveform, such that you have realistic and constant black+white levels to match.

  • When you want quick refresh, make the waveforms as short as possible (=few pieces)

  • Get familiar with the epdiy_waveform_gen.py script. I also write the commands as nested lists like this:

    def generate_du4(display):
        """
        DC-Level (color -> level):
        0 -> 0
        1 -> 1
        2 -> 2
        3 -> 4
        """
    
        commands = [
            [  # from black (0) to
                [0, 0, 0, 0],  # black (0), dc balance stays the same
                [2, 1, 2, 0],  # dark grey (1), dc balance + 1
                [2, 2, 0, 0],  # light grey (2), dc balance + 2
                [2, 2, 2, 2]   # white (3), dc balance + 4
            ],
            [  # from 1 to
                [2, 1, 1, 0],
                [0, 0, 0, 0],
                [2, 0, 0, 0],
                [2, 2, 2, 0]
            ],
            [  # from 2 to
                [2, 1, 1, 1],
                [1, 0, 0, 0],
                [0, 0, 0, 0],
                [2, 2, 0, 0]
            ],
            [  # from white (3) to
                [1, 1, 1, 1],
                [1, 1, 1, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0]
            ]
        ]
    
        num_phases = len(commands[0][0])
        phases = []
        for frame in range(num_phases):
            def lutfunc(t, f):
                # one right-shift for mapping 5-bit to 4-bit;
                # two right-shifts for mapping 4-bit color depth to 2-bit
                return commands[int(f)>>3][int(t)>>3][frame]
    
            phase = generate_frame(lutfunc)
            phases.append(phase)
    
        return {
            "mode": mode_id("MODE_DU4"),
            "ranges": [
                {
                    "index": 0,
                    "phases": phases,
                    "phase_times": [60]*num_phases  # dummy values for <v7
                }
            ]
        }
  • Lastly, a gimmick: since you need to update the "current" nibbles yourself in the 1BPP-difference buffer, you can do funky stuff. For example, I split my waveform such that there's a dynamic mode first where anything with dark/light grey as target first gets moved to black/white instead. When updating the 1BPP-difference buffer, that can easily be handled with if-branches. Then in a second, static stage (i.e. when image updates like mouse cursor movement stop), the grey levels are applied. That makes for faster updates when scrolling / dragging windows etc, and reduces ghosting, because maintaining low ghosting is easier with black+white only.

So yeah, sorry for the long post - as you can see, waveform hacking is a bit of a tedious endeavour, but totally doable. Don't blame me if your display breaks, but read the book! 😅 Maybe we could even extend the epdiy_waveform_gen.py such that it checks for all waveforms that dc balance is maintained - just as a safeguard.
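
Following up on that safeguard idea, a minimal sketch of such a dc-balance check over nested commands lists like the DU4 one above (assuming darker = -1 and brighter = +1 towards the per-color dc level, as described in the bullet points):

def check_dc_balance(commands, dc_level):
    """Verify that every transition ends at the target color's dc level.

    commands[from_color][to_color] is a list of ops (0=no-op, 1=darker, 2=brighter);
    dc_level[color] is the charge level to maintain for that color.
    """
    ok = True
    for frm, row in enumerate(commands):
        for to, ops in enumerate(row):
            net = sum(+1 if op == 2 else -1 if op == 1 else 0 for op in ops)
            expected = dc_level[to] - dc_level[frm]
            if net != expected:
                print(f"{frm} -> {to}: net charge {net}, expected {expected}")
                ok = False
    return ok

# The DU4 table above passes with its dc levels 0, 1, 2, 4:
# check_dc_balance(commands, dc_level=[0, 1, 2, 4])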

@vroland
Owner

vroland commented Feb 7, 2024

@schuhumi Thanks for the detailed writeup, that's a lot of useful information in one place! I think it would be nice to eventually codify this knowledge into the waveform generator. The cherry on top would be to actually have a particle dynamics model where we can at least roughly predict the gray level for a sequence of moves depending on CKV high time, temperature and other factors. Then we can throw some compute at it to find the shortest waveform with the desired properties ;) Does the book you linked talk about how to model the fluid? I don't have access unfortunately.

@aceisace
Author

aceisace commented Feb 7, 2024

This would work for most displays @vroland @schuhumi , however I can confirm that not all displays behave as such a model would predict, as I have several ones which behave the same way and yield unusable results with anything but the vendor waveform. Even though I managed to convert this waveform to be usable with epdiy, the resulting rendering was off the charts in a bad way. Hardly any of the 16 grayscales were showing up correctly; several gray tones were missing. With the built-in epdiy waveforms, all of my first batch of displays had the same issue: several vertical lines, roughly 1-2cm thick, which would remain on any and all rendered images at the same positions. This issue remained until I changed the waveform.

The issue is that changes to certain tones influence other tones too, most often negatively. Furthermore, the changes even within the same batch can sometimes be significant, requiring further adjustments to the waveform. I would still suggest prioritising the development of the universal waveform format using my waveform parser. However, from my findings, it seems that the waveform for a given mode and temperature range is itself also encoded. Hence, someone with a deep understanding of C++ must take a deeper look into the current modified inkwave script in order to find out how the format is encoded.

The idea is that an existing vendor waveform can, for once, be parsed, converted from one format to another and adjusted to suit the specific display, as there is no universal waveform. Only when the hurdles of conversion are taken down can users share modifications which actually work with epdiy. As of now, this is a major bottleneck of epdiy.

@schuhumi
Copy link

schuhumi commented Feb 9, 2024

@vroland Yes, I think so too. And also yes, the book has some descriptions of dynamics modelling. The problem with this is that you'd need to determine the exact properties to fill in the variables and make the equations produce sensible results. These properties do not only vary with temperature and display model, but (from what I have read) also between individual panels of the same model. Fundamentally, you'd have to strap something like a webcam with fixed exposure over your display in a controlled lighting environment, run a series of waveforms, and record the gray-level changes. At that point though, you could directly build a piece of software that creates waveforms automatically. The book also talks about such a setup in chapter 3.1.4, and you can find the paper here. I'm quite positive that something like this is doable. Also, there's the screen_diag example in epdiy, which already allows remote control of most drawing operations. If we made it possible to also load waveforms dynamically that way, it would be possible to build a similar system.

@aceisace I suspect that something more than the waveform is funky in your setup. As you know, waveforms tell the sequence of electric fields to be applied for every possible color transition; therefore they work with any size of display and have nothing to do with shapes, lines or positions. So things like wrong gray tones stem from waveforms, but geometric artifacts cannot. What do the displays do with the stock epdiy waveforms? Gray tones are expected to be off/missing, but the display should not show arbitrary lines and the like. Otherwise, you could try to decrease the clock speed (bus_speed in your EpdDisplay_t struct, to my understanding), just in case these are artifacts caused by the display not being fast enough.

On the topic of a universal waveform format: I'm not sure if you're aware, but epdiy also already has a JSON-based intermediate waveform format that is quite nice to handle. scripts/epdiy_waveform_gen.py creates the stock waveforms in this format, and scripts/waveform_hdrgen.py then turns that JSON into the C header. So if your converter manages to output this JSON format, you do not have to bother with the C header files :) To me it reads like you were able to convert a vendor waveform and even modify it, so I'm not sure what you mean by "it seems that the waveform for a given mode and temperature range itself is also encoded" - wouldn't you have been able to use the vendor waveform then?
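To make that route a bit more tangible: below is a minimal sketch of a converter backend that emits JSON instead of a C header. The outer structure ("modes"/"ranges") is an assumption modelled loosely on the snippet quoted further up in this thread; the authoritative schema is whatever scripts/epdiy_waveform_gen.py produces and scripts/waveform_hdrgen.py consumes, so check those before relying on the field names.

import json

# Hedged sketch only: the field names below are assumptions, not epdiy's
# documented schema. parsed_modes is expected to look like
# {mode_id: {temp_range_index: phases}}, where phases is whatever per-frame
# structure the parser produces for that mode/temperature combination.
def export_intermediate(parsed_modes, outfile):
    doc = {"modes": []}
    for mode_id, ranges in sorted(parsed_modes.items()):
        mode = {"mode": mode_id, "ranges": []}
        for range_index, phases in sorted(ranges.items()):
            mode["ranges"].append({
                "index": range_index,
                "phases": phases,
                "phase_times": [60] * len(phases),  # dummy values, as above
            })
        doc["modes"].append(mode)
    with open(outfile, "w") as f:
        json.dump(doc, f, indent=2)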

Generally, from what I've learned so far, I'm not sure how well vendor waveforms will work with epdiy, at least out of the box. As mentioned above, the data in the waveform only tells the sequence of electric fields to make a pixel darker/brighter for a given color transition. This sequence is not the full signal at the pixel: there is the duration for which the field should be active, which is not specified in the waveforms. Then there are gaps in time between those darker/lighter commands, which stem from how epdiy works / how it pushes the data to the display. These will be different from the ones produced by the vendor's driver, yet have an impact on the resulting grey level.

The issue is that changes on certain tones influence other tones too, most often negatively.

Yes, absolutely! To be more precise, two gray tones might look similar to the eye, but depending on how they came to be, they might behave differently in the next transition. I think that "hidden states" like these could be what's behind 5-bit waveforms: an additional bit to discern these differences.

Posted from the e-paper computer :D

@aceisace
Copy link
Author

@schuhumi @vroland
Sorry for the late reply, just been a bit busy with work lately.

To be more precise: for testing purposes, I was able to get my hands on a display directly from e-ink. As such, it should work with the given waveform. At the beginning I thought the same, i.e. that my setup might be the issue, and I even went ahead and bought an oscilloscope to try to find it. I was unable to find any difference between the setup driving the e-ink display and the ones driving displays from other vendors; nothing was changed in the software or hardware, the only thing that differed was the display itself.

With the displays from e-ink, it was not possible to get rid of the two vertical sections running from top to bottom. In fact, e-ink ships different waveforms even for the same display model across batches; hence, although the results might be close or similar, it is very unlikely that a single universal waveform will work for all of them.

The issue with the grayscales is also very unlikely to be related to maths or how the eye perceives those gray shades, but rather to intra-grayscale conflicts caused by mismatched timing and/or waveforms not supported by the specific display. Even with a decreased bus speed, the issue remains and even gets worse. Changing the timings also results in similar issues, as the timings have to be carefully managed in order not to strain the display too long with a positive or negative charge.

Nonetheless, I want to pursue the universal waveform format and the universal parser which I have developed, as maybe, if not now then later, this will be a great and powerful asset to harness the advantages and quality of these displays. Although the waveforms themselves require an NDA, there are plenty on the web.

I've gotten to the point where I can extract the waveform segment, but this is in hex; imagine a long hex string (with varying length between each segment). The question is rather: how can we convert (or decode) this hex string into something more useful, e.g. JSON, which is much easier to work with and convert to other formats, including epdiy's? For this, as you have already realised, the modified version of inkwave and a C/C++ developer with a solid understanding of it are needed in order to understand how inkwave parses this hex segment. Unfortunately, I have not been able to find someone like this yet, but I hope that by making my parser open-source, someone will eventually pick this up.
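For what it's worth, the raw unpacking step on its own is short. Below is a hedged sketch under the assumption (borrowed from how inkwave/waved treat the data) that each waveform byte packs four 2-bit phase codes, least-significant pair first; repeat/run-length bytes that some .wbf variants insert between the data are deliberately not handled and would need to be expanded before calling this.

# Hedged sketch: unpack raw waveform bytes into 2-bit phase codes.
# Assumption: four 2-bit codes per byte, least-significant pair first
# (so 0x55 -> [1, 1, 1, 1] and 0xAA -> [2, 2, 2, 2]). Repeat/run-length
# bytes are not handled here.
def unpack_phase_bytes(raw: bytes):
    return [[(byte >> shift) & 0b11 for shift in (0, 2, 4, 6)] for byte in raw]

print(unpack_phase_bytes(bytes.fromhex("55aab040")))
# -> [[1, 1, 1, 1], [2, 2, 2, 2], [0, 0, 3, 2], [0, 0, 0, 1]]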

@floers
Copy link

floers commented Sep 15, 2024

Hey, I want to get into this subject and help. Originally I wanted to create my own UI for the remarkable2 tablet. The wbf files are stored on the file system and are easy to access. There is a project that successfully parses these files and uses them to render. I started rewriting it in Rust (I can hardly read C++). The parsing part is done, but not the rm2-specific rendering... but that's not interesting here anyway. Since the library works, it seems to me that the author understood how to interpret the hex segments. My understanding is that these hex numbers are the lookup-table values, though there are bytes in between that determine how often they must be repeated during parsing... well, you can look at it.

My Rust translation is here: https://gitlab.com/floers/waved-rs.

I read this whole conversation twice (I also read the posts on essentialscrap) but I still have questions:

  1. Why are there 30 "waveform pieces"? Are these the vendor "modes" or the temperature ranges? If not, how are those represented?
  2. I understood how we determine our "mode" (lighter, darker, noop). I am not sure whether I understood what is done with that:
  • We select the row to draw via the gate driver by writing 1 and then n times 0, where n is our row?
  • Now what? We can only draw a whole row, so we have to draw res_w many values, where each value has to be looked up in our waveform?
    Is this roughly correct?

@aceisace
Copy link
Author

aceisace commented Sep 17, 2024

@floers hi there and thanks for your interest!
I'm not the author of this project, however, I am the one behind the python-based wbf parser.

The author of this repo did not write a parser from scratch, but modified an existing one from here. The modified version by the author of this project can be found here. I've asked them for assistance with this, but even they struggle with parts of it.

I am also not an expert in C/C++, but I can write code of up to medium complexity in C++, though I specialise in Python. You are right that these hex values are most likely themselves lookup tables; however, I do not yet understand how to get the phases out of them. If you have found out how to convert the hex values into phases, could you help me understand it so I can improve the current version? The thing is that these hex values should be encoded in such a way that, given any grayscale value from 0 to 15, they describe the phases needed to reach every supported grayscale from 0 to 15 - like a 16x16 matrix, with the phases being the third dimension.

That part most likely exists in the original inkwave repo, but it's fairly complex code and I have not yet understood that section well enough to convert the phases into a more useful format like JSON or CSV, as suggested in this article, for further conversion into various other formats.

Concerning your questions, I'll try my best to answer them:

Why are there 30 "waveform pieces"? Are these the vendor "modes" or the temperature ranges? If not how are those represented?
A .wbf file contains information about which voltage sequence (aka phases) needs to be applied to go from a known grayscale value to each of the 16 grayscales. As e-ink works by applying voltages across microcapsules (the pixels) filled with oil and charged pigments, the behaviour (or more precisely, the viscosity of the oil) changes with temperature. To get consistent results, the phases differ with temperature. Furthermore, different modes are also possible.

In short, with a certain mode you can sacrifice most of the grayscales and render essentially pure black-and-white pixels (mono), but gain significant speed. Other times you may need the best quality (GC16 mode), at the cost of speed, flickering and longer updates. Then there's also GL16, which does the same but without flickering.

Essentially, it boils down to 1) remembering the current state of the pixel (or performing a clear to make it all white) and 2) applying the phases for the given mode and temperature range to a specific pixel to get the desired grayscale.

Here's the breakdown of the wbf file:

HEADER...
TEMP-RANGE-1
    ...waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...
TEMP-RANGE-2
    ...waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...
TEMP-RANGE-3
    ...waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...
TEMP-RANGE-4
    ...waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...
....

If, for example, we want the best rendering quality and assume that all pixels are white and that the temperature is currently 23 degrees, we'd first have to look for the temperature range that includes this temperature. Let's say temp-range-3 covers 20-25 degrees; then our section of interest is narrowed down to:

TEMP-RANGE-3
    ...waveform-for-mode-1...
    ...waveform-for-mode-2...
    ...waveform-for-mode-3...
    ...waveform-for-mode-4...

As we know from earlier, the modes represent different ways of updating pixels. Let's say mode-1 is for mono and mode-2 is for GC16 (a flickering update supporting 16 grayscales); since we want the best quality, we have found our waveform: ...waveform-for-mode-2...,
which includes all phases for all 16 grayscale values. In short, the 3D matrix: 16x16 combinations of source and target grayscale, with the phases (0 for no-op, 1 for lighter, 2 for darker) as the third dimension. From this point onward, it's a matter of applying said phases to every pixel to get the desired grayscale value.
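As a small illustration of that selection step, here is a sketch in Python; the dict layout and the mode names used are made up for the example and do not match any particular parser output.

# Hypothetical parsed structure: {(t_min, t_max): {mode_name: waveform}},
# where each waveform is the per-transition phase table described above.
wbf = {
    (0, 20):  {"MONO": ..., "GC16": ...},
    (20, 25): {"MONO": ..., "GC16": ...},
    (25, 50): {"MONO": ..., "GC16": ...},
}

def select_waveform(wbf, temperature, mode):
    """Pick the waveform table for the current panel temperature and mode."""
    for (t_min, t_max), modes in wbf.items():
        if t_min <= temperature < t_max:
            return modes[mode]
    raise ValueError(f"no temperature range covers {temperature} degrees")

gc16_at_23c = select_waveform(wbf, temperature=23, mode="GC16")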

I understood how we determine our "mode" (lighter, darker, noop). I am not sure whether I understood what is done with that:
We select the row to draw via the gate driver by writing 1 and then n times 0, where n is our row?
Now what? We can only draw a whole row, so we have to draw res_w many values, where each value has to be looked up in our waveform?
Is this roughly correct?

Part of the above explanation should have answered what is done with the phases. While I do not know or understand the low-level code of epdiy, it boils down to the following:

1) Remember the state of all pixels, i.e. which grayscale value they currently have. For the sake of simplicity, we can assume we always start with all white.
2) Apply phases according to the lookup table. Say pixel (0,0) is currently fully white and we want to make it black. Then we'd use the 3D matrix extracted from the waveform section to find out which phases need to be applied. For example, let's say that to go from white to black we need to apply "darker" 5 times according to the phases. Then, for 5 time units (e.g. 15 ms each), we'd keep applying negative voltage, and we'd end up with black.

Assuming the initial pixel state is always fully white, we now have a 2D matrix, e.g. this fictional LUT:

initial state   target state   phases
0               15             2,2,2,2,2
0               14             0,2,2,2,2
0               13             0,0,2,2,2
0               12             0,0,0,2,2
0               11             0,0,0,0,2
0               10             0,1,0,2
0               9              0,1,1,2
0               8              0,1,1,1
0               7              0,1,1,1
0               6              0,0,2,0
0               5              0,0,0,2
0               4              0,0,1,0
0               3              0,0,0,1
0               2              0,0,1,1
0               1              0,0,0,1
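To make the "apply the phases per pixel" step concrete, here is a hedged sketch of the data flow only; the LUT shape (keyed by (initial, target) gray level) mirrors the fictional table above, but the way epdiy actually assembles and clocks out the frames on the bus is a different story.

# Illustration only, not epdiy's driver code. lut[(initial, target)] is a
# list of phase codes (0 = no-op, 1 = lighter, 2 = darker), one per frame,
# exactly like the fictional table above.
def build_frames(lut, current, target):
    """Turn current/target grayscale images into per-frame phase planes."""
    num_frames = max(len(phases) for phases in lut.values())
    frames = []
    for i in range(num_frames):
        frame = []
        for row_cur, row_tgt in zip(current, target):
            frame.append([
                lut[(c, t)][i] if i < len(lut[(c, t)]) else 0  # pad with no-op
                for c, t in zip(row_cur, row_tgt)
            ])
        frames.append(frame)
    return frames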

@floers
Copy link

floers commented Sep 17, 2024

Hey @aceisace. Thanks for your reply. I know about the waveform modes and what they are used for. I just wonder why there are 30 "waveform parts" (at least in one header waveform).

I mean e.g. this code:

const uint8_t epd_wp_epdiy_ED133UT2_2_0_data[30][16][4] = { ...

From what you explained and what I know from looking into waved (I forgot to link it: https://github.com/matteodelabre/waved), I would expect a more map-like structure - basically what you just wrote. But there are just 30 matrices, which @schuhumi called waveform parts, and I am unsure what exactly they refer to. In other words: what would it mean to take epd_wp_epdiy_ED133UT2_2_0_data[14]?

But from your answer I guess the 30 parts are just the 30 waveforms (N per temp per mode) in the order they were in the wbf file and I need to know the mode count and temp count to index them?

Regarding the meaning of the hex values. As far as I understood they can be interpreted this way:

https://gitlab.com/floers/waved-rs/-/blob/main/src/parser/waveform.rs?ref_type=heads#L305

@martinberlin
Copy link
Collaborator

In case this helps anyone: I've started writing an article in the epdiy wiki that covers in more detail how the eink panels are driven with V7. As soon as I research more, I will add code snippets and examples there:

https://github.com/vroland/epdiy/wiki/How-pixels-are-driven-in-a-parallel-epaper-with-epdiy I don't know all the answers myself, so Valentin helped with some of them in the Q&A section (where I left his feedback almost intact).

@aceisace
Copy link
Author

aceisace commented Oct 7, 2024

@floers Apologies for the late reply, I got a bit too busy with work and some other projects.
Concerning the 30 parts, those are basically phases, i.e. 30 "steps" are required in total to go from any of the 16 grayscales to any other of the 16 grayscales.

Taking epd_wp_epdiy_ED133UT2_2_0_data[14] means using the waveform for the 133UT2 display for mode 2 and temperature range 0 (or vice versa, I do not remember the order atm). This waveform itself has 14 steps, which is generally shorter than a vendor waveform for the same mode and temperature range. It's worth noting that the epdiy waveforms disregard the temperature anyway, and only vendor waveforms provide reasonably good rendering results.

Thanks for the link about the hex values. I have my hands full atm, but I'll try to spare some time in the coming weekends for this to give it another shot. Meanwhile, if you have made more progress, please share it too.

Perhaps because of some speed issues (just a hunch, take it with a grain of salt), the rendering results even with vendor waveforms are actually not that good. For example, on a 13.3" UT1, the vendor waveform produced worse results than the epdiy one, since the epdiy ones are generally shorter and carry per-phase timings, while the vendor ones use a constant time value for every phase. There could also be one or more other causes.

Thanks for the writeup and detailed documentation, @martinberlin !
