From 695d47fa6c60a8a479a316798b3b84228c6381f3 Mon Sep 17 00:00:00 2001 From: Tim Cowlishaw Date: Sun, 25 Aug 2024 08:00:48 +0200 Subject: [PATCH] about page copy changes --- ddlh/templates/pages/about_llms.j2 | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/ddlh/templates/pages/about_llms.j2 b/ddlh/templates/pages/about_llms.j2 index c96c7ce..cf1cbf7 100644 --- a/ddlh/templates/pages/about_llms.j2 +++ b/ddlh/templates/pages/about_llms.j2 @@ -8,7 +8,7 @@ The Distributed Design Learning Hub relies on both editorial content and curation, and the use of Large Language Models (LLMs) to help assist with discovering information and resources in the Distributed Design Platform archive.

- Wherever you see text that is set in a light blue colour, preceded by the "sparkle" emoji, that text has been generated by an LLM. Such text will always be preceded by a notice linking to this page for further details. Any other content you see on this site has been written, by hand, by a subject-matter expert from the Distributed Design team. At present, LLM-generated text is only used in the responses generated for queries for custom themes, using the search box on the homepage. + Wherever you see text that is set in a light blue colour, preceded by the "sparkle" emoji, that text has been generated by an LLM. Such text will always be preceded by a notice linking to this page for further details. Any other content you see on this site has been written, by hand, by a subject-matter expert from the Distributed Design Platform community. At present, LLM-generated text is only used in the responses generated for queries for custom themes, using the search box on the homepage.

LLM-generated text will always be an automatically generated summary of some content from within the Distributed Design Platform archive, and the sources used to generate it will always be linked and prominently referenced. @@ -26,7 +26,7 @@ Bear in mind that summaries are based on the information in the Distributed Design Platform archive, and as such, reflect the research, interests, and knowledge of our community in particular. Summaries for broad themes and topics will reflect our community's engagement with them, rather than giving a complete overview of the field.

  • - Always verify factual claims, particularly those liable to change over time. Our archive contains the archive of outputs of over seven years of research, and a lot can change over that time. Please do further research to check that the summarised information is still current! + Always verify factual claims, particularly those liable to change over time. Our archive contains the outputs of more than 10 years of research, and a lot can change over that time. Please do further research to check that the summarised information is still current!
  • Our commitment to responsible use of machine learning technologies

    @@ -42,7 +42,7 @@ Responsible, in that we take concrete steps to mitigate the social and environmental risks that LLMs and other machine learning technologies have the potential to present.
  • - Complemantary to our knowledge and skills, and those of the wider Distributed design community. Our use of LLMs is not intentded to replace or supplant domain knowledge or editorial skills, but to augment them. + Complementary to our knowledge and skills, and those of the wider Distributed Design community. Our use of LLMs is not intended to replace or supplant domain knowledge or editorial skills, but to augment them.
  • @@ -74,7 +74,7 @@

    For those who want more technical details: our embeddings are created with the mistral-embed model, stored in an elasticsearch database, and similarity is measured by cosine distance between the query and the fragment. We use the mixtral-8x-22B large language model to generate summaries, and llama_index as plumbing for the whole system. The entire application is open source, and available on GitHub.
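The cosine-similarity ranking step described above can be sketched in a few lines of plain Python. This is a minimal illustration only, not the production code (which delegates vector search to elasticsearch via llama_index); the function names and sample fragments are illustrative.

```python
from math import sqrt


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def rank_fragments(
    query_embedding: list[float],
    fragments: list[tuple[str, list[float]]],
) -> list[tuple[str, list[float]]]:
    """Return (text, embedding) fragments sorted by similarity to the query, best first.

    Minimising cosine *distance* is equivalent to maximising cosine similarity,
    so the most relevant archive fragments come first.
    """
    return sorted(
        fragments,
        key=lambda frag: cosine_similarity(query_embedding, frag[1]),
        reverse=True,
    )
```

In the real system, the top-ranked fragments would then be passed to the summarisation model as context for the generated response.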