From 695d47fa6c60a8a479a316798b3b84228c6381f3 Mon Sep 17 00:00:00 2001
From: Tim Cowlishaw
- Wherever you see text that is ✨ set in a light blue colour, preceded by the "sparkle" emoji, that text has been generated by an LLM. Such text will always be preceded by a notice linking to this page for further details. Any other content you see on this site has been written, by hand, by a subject-matter expert from the Distributed Design team. At present, LLM-generated text is only used in the responses generated for queries for custom themes, using the search box on the homepage.
+ Wherever you see text that is ✨ set in a light blue colour, preceded by the "sparkle" emoji, that text has been generated by an LLM. Such text will always be preceded by a notice linking to this page for further details. Any other content you see on this site has been written, by hand, by a subject-matter expert from the Distributed Design Platform community. At present, LLM-generated text is only used in the responses generated for queries for custom themes, using the search box on the homepage.
LLM-generated text will always be an automatically generated summary of some content from within the Distributed Design Platform archive, and the sources used to generate it will always be linked and prominently referenced.
@@ -26,7 +26,7 @@
 Bear in mind that summaries are based on the information in the Distributed Design Platform archive, and as such, reflect the research, interests, and knowledge of our community in particular. Summaries for broad themes and topics will reflect our community's engagement with them, not give an entire overview of the field.
@@ -74,7 +74,7 @@
For those who want more technical details: our embeddings are created with the mistral-embed model, stored in an Elasticsearch database, and similarity is measured by cosine distance between the query and the fragment. We use the mixtral-8x22B large language model to generate summaries, and llama_index as plumbing for the whole system. The entire application is open source, and available on Github.
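The retrieval step described above — ranking archive fragments by cosine similarity between the query embedding and each fragment embedding — can be sketched in plain NumPy. This is a minimal illustration only, with hypothetical in-memory vectors; the real system uses mistral-embed for the embeddings and Elasticsearch for storage and search.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the two vectors divided by
    # the product of their L2 norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_fragments(query_vec: np.ndarray, fragment_vecs: list, k: int = 3) -> list:
    # Score every fragment against the query, then return the indices
    # and scores of the k most similar fragments, best first.
    scores = [cosine_similarity(query_vec, f) for f in fragment_vecs]
    order = np.argsort(scores)[::-1][:k]
    return [(int(i), scores[i]) for i in order]

# Hypothetical 2-d embeddings, purely for illustration.
query = np.array([1.0, 0.0])
fragments = [
    np.array([1.0, 0.0]),   # identical direction to the query
    np.array([0.0, 1.0]),   # orthogonal to the query
    np.array([1.0, 1.0]),   # partially aligned
]
print(top_fragments(query, fragments, k=2))
```

In the production pipeline the top-ranked fragments would then be passed to the LLM as context for summary generation; llama_index handles that orchestration.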