diff --git a/README.md b/README.md
index bd2ba9a..edb4441 100644
--- a/README.md
+++ b/README.md
@@ -127,7 +127,9 @@ However, if you just want to inspect and replot the results, you can use the [pa
 
 ## Citations
 
-Any published work which uses this software should include the following citation:
+If you find this repository valuable, please give it a star!
+Got any questions or feedback? Feel free to reach out to [nils.blach@inf.ethz.ch](mailto:nils.blach@inf.ethz.ch) or open an issue.
+Using this in your work? Please reference us using the provided citation:
 
 ```bibtex
 @misc{besta2023got,
diff --git a/examples/README.md b/examples/README.md
new file mode 100644
index 0000000..e3aa875
--- /dev/null
+++ b/examples/README.md
@@ -0,0 +1,5 @@
+# Examples
+
+This directory contains scripts for running various examples using the Graph of Thoughts package. Each script is a standalone Python program that sets up and runs a particular example.
+
+Please refer to the individual example directories for more information on each specific example.
diff --git a/graph_of_thoughts/controller/README.md b/graph_of_thoughts/controller/README.md
index afddcbf..b0f24dc 100644
--- a/graph_of_thoughts/controller/README.md
+++ b/graph_of_thoughts/controller/README.md
@@ -9,14 +9,14 @@ Currently, the framework supports the following LLMs:
 - Llama-2 (Local - HuggingFace Transformers)
 
 The following section describes how to instantiate individual LLMs and the Controller to run a defined GoO.
-Furthermore, process of adding new LLM into the framework is outlined at the end.
+Furthermore, the process of adding new LLMs into the framework is outlined at the end.
 
 ## LLM Instantiation
 - Create a copy of `config_template.json` named `config.json`.
 - Fill configuration details based on the used model (below).
 
 ### GPT-4 / GPT-3.5
-- Adjust predefined `chatgpt`, `chatgpt4` or create new configuration with unique key.
+- Adjust predefined `chatgpt`, `chatgpt4`, or create a new configuration with a unique key.
 
 | Key | Value |
 |---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -38,8 +38,8 @@ lm = controller.ChatGPT(
 ```
 
 ### Llama-2
-- Requires local hardware to run inference and HuggingFace account.
-- Adjust predefined `llama7b-hf`, `llama13b-hf`, `llama70b-hf` or create new configuration with unique key.
+- Requires local hardware to run inference and a HuggingFace account.
+- Adjust predefined `llama7b-hf`, `llama13b-hf`, `llama70b-hf`, or create a new configuration with a unique key.
 
 | Key | Value |
 |---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -58,8 +58,8 @@ lm = controller.Llama2HF(
     model_name=
 )
 ```
-- Request access to Llama-2 via [Meta form](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) using same email address as for the HuggingFace account.
-- After the access is granted, go to [HuggingFace Llama-2 model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), log in and accept licence (_"You have been granted access to this model"_ message should appear).
+- Request access to Llama-2 via the [Meta form](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) using the same email address as for the HuggingFace account.
+- After the access is granted, go to the [HuggingFace Llama-2 model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), log in and accept the license (_"You have been granted access to this model"_ message should appear).
 - Generate HuggingFace access token.
 - Log in from CLI with: `huggingface-cli login --token `.
@@ -114,4 +114,4 @@ def query(self, query: str, num_responses: int = 1) -> Any:
 ```
 def get_response_texts(self, query_response: Union[List[Dict], Dict]) -> List[str]:
     # Retrieve list of raw strings from the LLM response structure
-```
\ No newline at end of file
+```
diff --git a/graph_of_thoughts/operations/README.md b/graph_of_thoughts/operations/README.md
index fb3f8ff..68745ff 100644
--- a/graph_of_thoughts/operations/README.md
+++ b/graph_of_thoughts/operations/README.md
@@ -37,7 +37,7 @@ Remember to set up the predecessors (and optionally successors) for your operati
 
 ## Available Operations
 The following operations are available in the module:
-**Score:** Collect all thoughts from preceeding operations and score them either using the LLM or a custom scoring function.
+**Score:** Collect all thoughts from preceding operations and score them either using the LLM or a custom scoring function.
 - num_samples (Optional): The number of samples to use for scoring, defaults to 1.
 - combined_scoring (Optional): Whether to score all thoughts together in a single prompt or separately, defaults to False.
 - scoring_function (Optional): A function that takes in a list of thought states and returns a list of scores for each thought.
@@ -46,25 +46,25 @@ The following operations are available in the module:
 - num_samples (Optional): The number of samples to use for validation, defaults to 1.
 - improve (Optional): Whether to improve the thought if it is invalid, defaults to True.
 - num_tries (Optional): The number of times to try improving the thought, before giving up, defaults to 3.
-- validate_funtion (Optional): A function that takes in a thought state and returns a boolean indicating whether the thought is valid.
+- validate_function (Optional): A function that takes in a thought state and returns a boolean indicating whether the thought is valid.
 
 **Generate:** Generate new thoughts from the current thoughts.
 If no previous thoughts are available, the thoughts are initialized with the input to the [Controller](../controller/controller.py).
 - num_branches_prompt (Optional): Number of responses that each prompt should generate (passed to prompter). Defaults to 1.
-- num_branches_response (Optional): Number of responses the LM should generate for each prompt. Defaults to 1.
+- num_branches_response (Optional): Number of responses the LLM should generate for each prompt. Defaults to 1.
 
 **Improve:** Improve the current thoughts. This operation is similar to the ValidateAndImprove operation, but it does not validate the thoughts and always tries to improve them.
 
 **Aggregate:** Aggregate the current thoughts into a single thought. This operation is useful when you want to combine multiple thoughts into a single thought.
 - num_responses (Optional): Number of responses to request from the LLM (generates multiple new thoughts). Defaults to 1.
 
-**KeepBestN:** Keep the best N thoughts from the preceeding thoughts. Assumes that the thoughts are already scored and throws an error if they are not.
+**KeepBestN:** Keep the best N thoughts from the preceding thoughts. Assumes that the thoughts are already scored and throws an error if they are not.
 - n: The number of thoughts to keep in order of score.
 - higher_is_better (Optional): Whether higher scores are better (True) or lower scores are better (False). Defaults to True.
 
-**KeepValid:** Keep only the valid thoughts from the preceeding thoughts. Assumes that each thought has already been validated, if not, it will be considered valid.
+**KeepValid:** Keep only the valid thoughts from the preceding thoughts. Assumes that each thought has already been validated; if not, it is considered valid.
 
-**Selector:** Select a number of thoughts from the preceeding thoughts using a selection function. This is useful if subsequent operations should only be applied to a subset of the preceeding thoughts.
+**Selector:** Select a number of thoughts from the preceding thoughts using a selection function. This is useful if subsequent operations should only be applied to a subset of the preceding thoughts.
 - selector: A function that takes in a list of thoughts and returns a list of thoughts to select.
 
-**GroundTruth**: Evaluates if the preceeding/current thoughts solve the problem and equal the ground truth. This operation is useful for terminating the graph and checking if the final thoughts solve the problem, but is only useful if the ground truth is known.
-- ground_truth_evaluator: A function that takes in a thought state and returns a boolean indicating whether the thought solves the problem.
\ No newline at end of file
+**GroundTruth:** Evaluates whether the preceding/current thoughts solve the problem and equal the ground truth. This operation is useful for terminating the graph and checking if the final thoughts solve the problem, but only if the ground truth is known.
+- ground_truth_evaluator: A function that takes in a thought state and returns a boolean indicating whether the thought solves the problem.
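
The Score and KeepBestN semantics documented in the operations README hunks (a custom `scoring_function`, and `n` / `higher_is_better` for keeping the top thoughts) can be sketched in plain Python. This is a toy illustration only, not the `graph_of_thoughts` API: the real operations act on thought states inside a Graph of Operations and can also score via the LLM, and the names `score`, `keep_best_n`, and `sortedness` here are hypothetical simplifications.

```python
# Toy sketch of the Score -> KeepBestN pipeline described in the
# operations README. Not the graph_of_thoughts API; names are hypothetical.
from typing import Callable, List, Tuple


def score(thoughts: List[str],
          scoring_function: Callable[[str], float]) -> List[Tuple[str, float]]:
    # Attach a score to each preceding thought (custom scoring_function variant).
    return [(t, scoring_function(t)) for t in thoughts]


def keep_best_n(scored: List[Tuple[str, float]],
                n: int, higher_is_better: bool = True) -> List[str]:
    # Keep the best n thoughts in order of score; error if any thought is unscored.
    if any(s is None for _, s in scored):
        raise ValueError("KeepBestN assumes all thoughts are already scored")
    ranked = sorted(scored, key=lambda ts: ts[1], reverse=higher_is_better)
    return [t for t, _ in ranked[:n]]


# Toy score for sorting-style thoughts: count of adjacent pairs in order.
def sortedness(thought: str) -> float:
    nums = [int(x) for x in thought.split()]
    return sum(a <= b for a, b in zip(nums, nums[1:]))


thoughts = ["1 3 2 4", "1 2 3 4", "4 3 2 1"]
best = keep_best_n(score(thoughts, sortedness), n=2)
print(best)  # ['1 2 3 4', '1 3 2 4']
```

As in the documented operation, `higher_is_better` defaults to `True`; passing `False` would keep the lowest-scored thoughts instead, e.g. for error-count scores.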