diff --git a/applications/decentral_ml.md b/applications/decentral_ml.md
index 766b7ddd644..3d78feb4553 100644
--- a/applications/decentral_ml.md
+++ b/applications/decentral_ml.md
@@ -82,7 +82,7 @@ Nodes are rewarded for successfully improving the model. This happens at "Model
- Python/TensorFlow
- Rust/Substrate
- IPFS
-
+
### Ecosystem Fit
@@ -172,10 +172,8 @@ Here are key publications in the field of decentralised federated machine learni
| **0b.** | Documentation | We will provide both a **Gitbook with basic tutorial** and **inline documentation** of the code that explains how a user can (for example) upload and train a model; this will show how the federated machine learning functionality works. |
| **0c.** | Testing and Testing Guide | Unit tests will comprehensively cover core functions, ensuring both functionality and robustness. In the Gitbook, we will describe how to run these tests. |
| **0d.** | Docker | We will provide a Dockerfile(s) that can be used to test all the functionality delivered with this milestone. |
-| 1. | Data Management Implementation | We will abstract the data management to support pluggable data storage implementations. We would like to cater with different decentralised storage types to test for update speed, retrieval and caching. We have thus far examined IPFS however would also investigate other potential solutions [Decentralised data options for polkadot](https://wiki.polkadot.network/docs/build-storage)|
-| 2. | Federated Learning Consensus | We will rewrite the core of the decentralised federated machine learning module. This will include examining two approaches both the [ProxyModel approach](https://github.com/layer6ai-labs/ProxyFL) and rexamining Google's implementation via [TensorFlow's Federated Learning](https://www.tensorflow.org/federated)|
-
-
+| 1. | Data Management Implementation | DecentralML aims to establish a system of on-chain incentives and charges for the four key parties involved in a model's training and utilization. These parties are defined as:
i) Model Engineers: These are the data scientists, mathematicians, and computer scientists who develop and refine the models.
ii) Data Annotators: These users enrich the model by providing labels and annotations.
iii) Data Contributors: These users enhance the model by adding gradients.
iv) Clients: These are the licensors of the model, who may wish to use the model for commercial, contribution or educational purposes.
Our initial step in this process involves storing the "Master" model in a decentralized storage system. This DecentralML "PUT" method will be parameterized, allowing for the selection of different storage types (e.g., 1=IPFS, 2=another storage type). We will abstract the data management to support pluggable data storage implementations and will implement IPFS initially. Note: in future, we would like to support different decentralised storage types to test for update speed, retrieval and caching (see [Decentralised data options for polkadot](https://wiki.polkadot.network/docs/build-storage)).
The "Model Creator" initiates the upload by calling DecentralML methods using the Substrate Python client library, initially uploading the "Master Model" and defining the "Training" parameters. These parameters include, but are not limited to, the following:
1. The staked pool payable amount sent by the "Model Creator" and stored on-chain as DOT (or a compatible coin). These assets will ultimately be used to incentivize Data Contributors, Model Engineers, and Data Annotators.
2. The percentage of the pool allocated to the Data Contributors, Model Engineers, and Data Annotators.
3. The charges for Model Engineers to download the model.
4. The charges for Licensors of the model to download it.
These DecentralML parameters will be set by the "Model Creator" using the Python client library. The method will return a globally unique identifier for the model (a sketch of this upload call follows this milestone's table).|
+| 2. | Federated Learning Consensus | We will implement the core logic for rewarding "Data Contributors". Our focus will be on supporting Google's [TensorFlow Federated](https://www.tensorflow.org/federated) implementation, given its widespread client support and the substantial commercial funding it has received for development. However, we acknowledge the limitations of this approach, particularly in relation to the [ProxyModel approach](https://github.com/layer6ai-labs/ProxyFL), and we may consider modifying TensorFlow's core FL libraries in future releases to incorporate the ProxyModel approach. In terms of specific deliverables, we plan to develop two DecentralML methods:
1. defineDataContributors([clientId], [walletaddress]): This method, called by the Model Creator (MC), identifies the "Data Contributors" who are eligible to train the model. The clientId is generated by the TensorFlow FL library. It is the MC's responsibility to manage and develop the relevant Client-specific applications using TensorFlow FL client libraries. This method is expected to be called before the "Client Selection" step outlined in the "Implementation > Federated Learning Consensus" section of this application. In this context, DecentralML serves as an auditing and reward mechanism for the Data Contributors.
2. rewardDataContributors([clientIdArray], [0-1]): This method, also called by the MC, rewards the Clients for their data contributions. We anticipate this function being called after step 4, Model Aggregation, once the MC has determined a score (defined as the second parameter, ranging from 0-1). This score represents a percentage of the remaining "Data Contributors Pool" defined during the initial upload of the "Master Model". The method then allocates the assets to the Data Contributors' wallets. The advantage of this approach is that it requires minimal modifications to the TensorFlow FL library. Instead, we focus on rewarding and providing transparency for each set of gradients passed by the client, thereby incentivizing the client to contribute data to the model.
We will also add additional administrative methods as needed, such as the ability to upload the associated Client gradients and query the allocation on-chain. A sketch of these two reward methods also follows this table.
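
To make the intended developer experience concrete, below is a minimal, hypothetical sketch of the deliverable-1 "PUT" upload call, written against the real py-substrate-interface client. The `DecentralML` pallet name, the `put_master_model` extrinsic, and all of its parameter fields are illustrative assumptions rather than a finalised API; only the client library calls themselves are real.

```python
# Hypothetical sketch only: pallet/extrinsic names and parameter fields are
# assumptions; the py-substrate-interface calls themselves are real.
from substrateinterface import SubstrateInterface, Keypair

substrate = SubstrateInterface(url="ws://127.0.0.1:9944")
creator = Keypair.create_from_uri("//Alice")  # the "Model Creator"

call = substrate.compose_call(
    call_module="DecentralML",              # assumed pallet name
    call_function="put_master_model",       # assumed "PUT" extrinsic
    call_params={
        "storage_type": 1,                  # 1 = IPFS (pluggable backend id)
        "model_cid": "Qm...",               # hash of the model in storage
        "staked_pool": 1_000 * 10**10,      # parameter 1: staked pool amount
        "contributor_share": 50,            # parameter 2: pool percentages
        "engineer_share": 30,
        "annotator_share": 20,
        "engineer_download_fee": 10 * 10**10,   # parameter 3
        "licensor_download_fee": 100 * 10**10,  # parameter 4
    },
)
extrinsic = substrate.create_signed_extrinsic(call=call, keypair=creator)
receipt = substrate.submit_extrinsic(extrinsic, wait_for_inclusion=True)
# The model's globally unique identifier is expected back as an event.
print(receipt.triggered_events)
```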
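In the same spirit, here is a hypothetical sketch of the two deliverable-2 reward methods. The snake_case extrinsic names mirror defineDataContributors and rewardDataContributors above but are assumptions, as are all fields; the 0-1 score is expressed in basis points, since Substrate runtimes typically avoid floating point.

```python
# Hypothetical sketch: registering and rewarding Data Contributors. The
# "DecentralML" pallet and both extrinsic names are illustrative assumptions.
from substrateinterface import SubstrateInterface, Keypair

substrate = SubstrateInterface(url="ws://127.0.0.1:9944")
creator = Keypair.create_from_uri("//Alice")       # the Model Creator (MC)
model_guid = "0x..."                               # as returned by the upload

# defineDataContributors: map TensorFlow Federated client ids to wallets,
# called before TFF's "Client Selection" step.
define = substrate.compose_call(
    call_module="DecentralML",
    call_function="define_data_contributors",      # assumed extrinsic
    call_params={
        "model_guid": model_guid,
        "client_ids": ["tff-client-01", "tff-client-02"],
        "wallet_addresses": [
            "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
            "5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty",
        ],
    },
)
substrate.submit_extrinsic(
    substrate.create_signed_extrinsic(call=define, keypair=creator),
    wait_for_inclusion=True,
)

# rewardDataContributors: after step 4 (Model Aggregation) the MC scores the
# round; 2500 basis points = 0.25 of the remaining Data Contributors pool.
reward = substrate.compose_call(
    call_module="DecentralML",
    call_function="reward_data_contributors",      # assumed extrinsic
    call_params={
        "model_guid": model_guid,
        "client_ids": ["tff-client-01"],
        "score_bps": 2500,
    },
)
substrate.submit_extrinsic(
    substrate.create_signed_extrinsic(call=reward, keypair=creator),
    wait_for_inclusion=True,
)
```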
### Milestone 2 — Collective Economy and Client Interface
@@ -191,5 +189,5 @@ Here are key publications in the field of decentralised federated machine learni
| **0b.** | Documentation | We will provide both a **Gitbook with basic tutorial** and **inline documentation** of the code that explains how a user can (for example) work with governance and stake against a model and how the client interface works. |
| **0c.** | Testing and Testing Guide | Comprehensive unit tests to ensure core functionality and robustness of code. Instructions on how to run the tests will be included. |
| **0d.** | Docker | We will provide a Dockerfile(s) that can be used to test all the functionality delivered with this milestone. |
-| 1. | Collective Economy | We will implement governance pertaining to the election and training of models, as well as, parameter settings for training. This may include the use of the Collective pallet depending on how best to solve problems like jurisdiction, licensing, usage, as well as parameter settings for algorithm selection and so on. In addition, we plan to also implement a token staking to enable any XCM compatible token to be staked against the training of a submitted model.
-| 2. | Client Interface | The scope will be limited to extending Substrate's api-client libraries, We do not anticipate creating a UI for it rather focusing on a library integration for potential furture targets such as Jupyter Notebooks and other machine learning development tools. |
\ No newline at end of file
+| 1. | Collective Economy | We plan to establish governance mechanisms related to the selection and training of models, specifically for Data Annotators and Model Engineers (illustrative sketches of both method sets follow this milestone's table).
For Model Engineers, we will implement the following logical methods:
1. listMasterModels: This method returns a report listing the modelGUID, modelName, usageType, usageTypeCost, and costTokenAccept.
2. getMasterModels(licenseUsage, quantityOfPaymentCoins): This method takes the type of usage and the payable amount for that usage, returning the master model. It operates on an element of trust, with users expected to pay the appropriate amount based on the associated licensing (MIT, Apache, etc. defined on-chain for the model). While we don't anticipate this being a problem for this grant, future releases may include more sophisticated whitelists or permissions.
3. listModelEngineers(modelGUID): This method returns a list of Model Engineer (ME) wallet addresses approved to call the collectiveApprovesModel method.
4. modelEngineerUpdate(modelGUID, model, senderWalletAddress): Anyone can send their version of the model, which will be stored on-chain for review and approval. We may add more permissions to this method, but the idea is to keep it as open as possible.
5. listModelEngineerUpdates(modelGUID): This method returns a report listing the senderWalletAddress, model version, block number, and updateID.
6. collectiveApprovesModel(updateID, collective member or MC sender address, reward percentage:0-1): This method approves the model to replace the "Master" model and awards the Model Engineer a percentage (defined by the 0-1 parameter) of the Model Engineer pool.
7. addCollectiveMember(modelGUID, walletAddress): This method adds collective members to the approval list so that updates to the model by MEs can be approved. Future expansions may support issues like jurisdiction, licensing, and usage, as well as parameter settings for algorithm selection and more.
For Data Annotators, we will implement similar logical methods:
1. uploadDataForAnnotation(image, text, sound, testQuestionnaire:questionText, answerType, questionId, answerPoints:numberPointsRewarded, batchParameters): This method allows collective members to provide data that requires annotation. The solution design records the answer types as columns and the questions as rows, enabling a wide variety of annotation questions to be modelled depending on the model requirement.
2. getDataAnnotationQuestionnaire(modelGUID): This method returns a list of required data annotations and associated questionnaire information with data types as columns, offering flexibility to build a wide variety of dApps that could harness and offer various rewards to DAs.
3. submitDataAnnotationForReward(modelGUID, questionId, answer): This method implements a simple validation test for the submitted annotated data, with the potential for more sophisticated game theory algorithms to validate DA submissions in the future.
4. reportDataAnnotationAwards(modelGUID): This method returns a report of pending and allocated rewards.
+| 2. | Client Interface | We will focus on utilising Substrate api-client libraries, particularly the Python library. We won't be developing a user interface for Data Annotators or any other specific targeted party. Instead, our attention will be on integrating a Python library, which could potentially be used with applications like Jupyter Notebooks and other machine learning tools in the future. It's important to note that we anticipate commercial "Clients" will use the 'getMasterModels' method, as outlined in the Collective Economy section above. |
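
As with Milestone 1, here is a minimal, hypothetical sketch of the Model Engineer governance loop described above. The `DecentralML` pallet, the snake_case renderings of the logical methods, and all fields are assumptions for illustration; only the py-substrate-interface calls are real.

```python
# Hypothetical sketch: a Model Engineer submits an update and a collective
# member approves it. All DecentralML names are illustrative assumptions.
from substrateinterface import SubstrateInterface, Keypair

substrate = SubstrateInterface(url="ws://127.0.0.1:9944")
engineer = Keypair.create_from_uri("//Bob")        # a Model Engineer (ME)
collective = Keypair.create_from_uri("//Charlie")  # an approved collective member
model_guid = "0x..."                               # e.g. from listMasterModels

# modelEngineerUpdate: anyone may submit a candidate model version.
update = substrate.compose_call(
    call_module="DecentralML",
    call_function="model_engineer_update",         # assumed extrinsic
    call_params={
        "model_guid": model_guid,
        "model_cid": "Qm...",                      # storage hash of the update
    },
)
substrate.submit_extrinsic(
    substrate.create_signed_extrinsic(call=update, keypair=engineer),
    wait_for_inclusion=True,
)

# collectiveApprovesModel: promote the update to "Master" and pay the ME
# 1000 basis points = 0.10 of the Model Engineer pool (the 0-1 parameter).
approve = substrate.compose_call(
    call_module="DecentralML",
    call_function="collective_approves_model",     # assumed extrinsic
    call_params={"update_id": 1, "reward_bps": 1000},
)
substrate.submit_extrinsic(
    substrate.create_signed_extrinsic(call=approve, keypair=collective),
    wait_for_inclusion=True,
)
```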
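And a matching hypothetical sketch of the Data Annotator round trip. The questionnaire read is modelled here as a pallet storage query; the storage item and extrinsic names are assumptions, as are the example question id and answer.

```python
# Hypothetical sketch: a Data Annotator (DA) fetches the questionnaire and
# submits an answer for reward. All DecentralML names are assumptions.
from substrateinterface import SubstrateInterface, Keypair

substrate = SubstrateInterface(url="ws://127.0.0.1:9944")
annotator = Keypair.create_from_uri("//Dave")      # a Data Annotator
model_guid = "0x..."

# getDataAnnotationQuestionnaire: read the questions (rows) and answer
# types (columns) published for this model.
questionnaire = substrate.query(
    module="DecentralML",
    storage_function="DataAnnotationQuestionnaires",  # assumed storage item
    params=[model_guid],
)
print(questionnaire.value)

# submitDataAnnotationForReward: answer one question; a simple validation
# test gates the reward, per the deliverable above.
submit = substrate.compose_call(
    call_module="DecentralML",
    call_function="submit_data_annotation_for_reward",  # assumed extrinsic
    call_params={
        "model_guid": model_guid,
        "question_id": 7,
        "answer": "cat",
    },
)
substrate.submit_extrinsic(
    substrate.create_signed_extrinsic(call=submit, keypair=annotator),
    wait_for_inclusion=True,
)
```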
\ No newline at end of file