- fixed typos and slightly improved language of the README files

- added README file to the examples directory
- updated the citation section in the main README
This commit is contained in:
Robert Gerstenberger 2023-08-21 09:23:12 +02:00 committed by Nils Blach
parent 41a0a6e67c
commit 2f4b828aef
4 changed files with 23 additions and 16 deletions

@@ -127,7 +127,9 @@ However, if you just want to inspect and replot the results, you can use the [pa
## Citations
- Any published work which uses this software should include the following citation:
+ If you find this repository valuable, please give it a star!
+ Got any questions or feedback? Feel free to reach out to [nils.blach@inf.ethz.ch](mailto:nils.blach@inf.ethz.ch) or open an issue.
+ Using this in your work? Please reference us using the provided citation:
```bibtex
@misc{besta2023got,

examples/README.md Normal file

@@ -0,0 +1,5 @@
# Examples
This directory contains scripts for running various examples using the Graph of Thoughts package. Each script is a standalone Python program that sets up and runs a particular example.
Please refer to the individual example directories for more information on each specific example.


@@ -9,14 +9,14 @@ Currently, the framework supports the following LLMs:
- Llama-2 (Local - HuggingFace Transformers)
The following section describes how to instantiate individual LLMs and the Controller to run a defined GoO.
- Furthermore, process of adding new LLM into the framework is outlined at the end.
+ Furthermore, the process of adding new LLMs into the framework is outlined at the end.
## LLM Instantiation
- Create a copy of `config_template.json` named `config.json`.
- Fill in the configuration details based on the model used (see below).
### GPT-4 / GPT-3.5
- - Adjust predefined `chatgpt`, `chatgpt4` or create new configuration with unique key.
+ - Adjust predefined `chatgpt`, `chatgpt4` or create a new configuration with a unique key.
| Key | Value |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -38,8 +38,8 @@ lm = controller.ChatGPT(
```
### Llama-2
- - Requires local hardware to run inference and HuggingFace account.
- - Adjust predefined `llama7b-hf`, `llama13b-hf`, `llama70b-hf` or create new configuration with unique key.
+ - Requires local hardware to run inference and a HuggingFace account.
+ - Adjust predefined `llama7b-hf`, `llama13b-hf`, `llama70b-hf` or create a new configuration with a unique key.
| Key | Value |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -58,8 +58,8 @@ lm = controller.Llama2HF(
model_name=<configuration key>
)
```
- - Request access to Llama-2 via [Meta form](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) using same email address as for the HuggingFace account.
- - After the access is granted, go to [HuggingFace Llama-2 model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), log in and accept licence (_"You have been granted access to this model"_ message should appear).
+ - Request access to Llama-2 via the [Meta form](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) using the same email address as for the HuggingFace account.
+ - After the access is granted, go to [HuggingFace Llama-2 model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), log in and accept the license (_"You have been granted access to this model"_ message should appear).
- Generate a HuggingFace access token.
- Log in from CLI with: `huggingface-cli login --token <your token>`.
@@ -114,4 +114,4 @@ def query(self, query: str, num_responses: int = 1) -> Any:
```
def get_response_texts(self, query_response: Union[List[Dict], Dict]) -> List[str]:
# Retrieve list of raw strings from the LLM response structure
```
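The two abstract methods above can be illustrated with a toy stand-in. The `EchoLM` class below is hypothetical and exists only for this sketch; a real implementation would subclass the framework's abstract language model class and call an actual LLM backend:

```python
from typing import Any, Dict, List, Union

class EchoLM:
    """Toy stand-in illustrating the two abstract methods.

    A real implementation would send the prompt to an LLM; this one
    simply echoes the query back as the response text.
    """

    def query(self, query: str, num_responses: int = 1) -> Any:
        # A real backend would issue `num_responses` completions here.
        return [{"text": f"echo: {query}"} for _ in range(num_responses)]

    def get_response_texts(self, query_response: Union[List[Dict], Dict]) -> List[str]:
        # Normalize a single response dict into a list, then extract texts.
        if isinstance(query_response, dict):
            query_response = [query_response]
        return [response["text"] for response in query_response]
```

The response structure (a list of dicts with a `"text"` key) is an assumption for the example; each concrete LLM wrapper defines its own.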


@@ -37,7 +37,7 @@ Remember to set up the predecessors (and optionally successors) for your operati
## Available Operations
The following operations are available in the module:
- **Score:** Collect all thoughts from preceeding operations and score them either using the LLM or a custom scoring function.
+ **Score:** Collect all thoughts from preceding operations and score them either using the LLM or a custom scoring function.
- num_samples (Optional): The number of samples to use for scoring, defaults to 1.
- combined_scoring (Optional): Whether to score all thoughts together in a single prompt or separately, defaults to False.
- scoring_function (Optional): A function that takes in a list of thought states and returns a list of scores for each thought.
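As one illustration of a custom `scoring_function`, the sketch below scores thoughts by brevity. The dict-based state layout with a `"current"` key is an assumption made for this example; the actual keys in a thought state depend on your prompter and parser:

```python
from typing import Dict, List

def length_penalty_score(states: List[Dict]) -> List[float]:
    # Hypothetical scorer: shorter candidate answers receive higher
    # scores. The "current" key is an assumed state field.
    return [1.0 / (1 + len(state.get("current", ""))) for state in states]
```

Such a function would then be passed via the `scoring_function` parameter described above.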
@@ -46,25 +46,25 @@ The following operations are available in the module:
- num_samples (Optional): The number of samples to use for validation, defaults to 1.
- improve (Optional): Whether to improve the thought if it is invalid, defaults to True.
- num_tries (Optional): The number of times to try improving the thought, before giving up, defaults to 3.
- - validate_funtion (Optional): A function that takes in a thought state and returns a boolean indicating whether the thought is valid.
+ - validate_function (Optional): A function that takes in a thought state and returns a boolean indicating whether the thought is valid.
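For example, a custom `validate_function` for a sorting task might look like the following sketch (the `"current"` state key holding the candidate list is an assumption for illustration):

```python
from typing import Dict

def is_sorted(state: Dict) -> bool:
    # Hypothetical validator: the thought is valid if its candidate
    # list (assumed to live under the "current" key) is non-decreasing.
    values = state.get("current", [])
    return all(a <= b for a, b in zip(values, values[1:]))
```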
**Generate:** Generate new thoughts from the current thoughts. If no previous thoughts are available, the thoughts are initialized with the input to the [Controller](../controller/controller.py).
- num_branches_prompt (Optional): Number of responses that each prompt should generate (passed to prompter). Defaults to 1.
- - num_branches_response (Optional): Number of responses the LM should generate for each prompt. Defaults to 1.
+ - num_branches_response (Optional): Number of responses the LLM should generate for each prompt. Defaults to 1.
**Improve:** Improve the current thoughts. This operation is similar to the ValidateAndImprove operation, but it does not validate the thoughts and always tries to improve them.
**Aggregate:** Aggregate the current thoughts into a single thought. This operation is useful when you want to combine multiple thoughts into a single thought.
- num_responses (Optional): Number of responses to request from the LLM (generates multiple new thoughts). Defaults to 1.
- **KeepBestN:** Keep the best N thoughts from the preceeding thoughts. Assumes that the thoughts are already scored and throws an error if they are not.
+ **KeepBestN:** Keep the best N thoughts from the preceding thoughts. Assumes that the thoughts are already scored and throws an error if they are not.
- n: The number of thoughts to keep in order of score.
- higher_is_better (Optional): Whether higher scores are better (True) or lower scores are better (False). Defaults to True.
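The effect of KeepBestN on scored thoughts can be sketched as follows. This is a standalone illustration of the semantics, not the package's implementation; thoughts are represented as hypothetical `(state, score)` pairs:

```python
from typing import Any, Dict, List, Tuple

def keep_best_n(
    scored: List[Tuple[Dict[str, Any], float]],
    n: int,
    higher_is_better: bool = True,
) -> List[Tuple[Dict[str, Any], float]]:
    # Sort by score (descending when higher is better) and keep the top n.
    return sorted(scored, key=lambda pair: pair[1], reverse=higher_is_better)[:n]
```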
- **KeepValid:** Keep only the valid thoughts from the preceeding thoughts. Assumes that each thought has already been validated, if not, it will be considered valid.
+ **KeepValid:** Keep only the valid thoughts from the preceding thoughts. Assumes that each thought has already been validated; if not, it will be considered valid.
- **Selector:** Select a number of thoughts from the preceeding thoughts using a selection function. This is useful if subsequent operations should only be applied to a subset of the preceeding thoughts.
+ **Selector:** Select a number of thoughts from the preceding thoughts using a selection function. This is useful if subsequent operations should only be applied to a subset of the preceding thoughts.
- selector: A function that takes in a list of thoughts and returns a list of thoughts to select.
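A `selector` might, for instance, forward only thoughts that are not yet marked as solved. The minimal `Thought` stand-in and the `"solved"` state key below are assumptions for illustration; the real thought class is provided by the package:

```python
from typing import Any, Dict, List

class Thought:
    # Minimal stand-in for the package's thought class, which carries
    # a state dictionary.
    def __init__(self, state: Dict[str, Any]) -> None:
        self.state = state

def select_unsolved(thoughts: List[Thought]) -> List[Thought]:
    # Hypothetical selector: keep only thoughts not yet marked solved.
    return [t for t in thoughts if not t.state.get("solved", False)]
```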
- **GroundTruth**: Evaluates if the preceeding/current thoughts solve the problem and equal the ground truth. This operation is useful for terminating the graph and checking if the final thoughts solve the problem, but is only useful if the ground truth is known.
+ **GroundTruth**: Evaluates if the preceding/current thoughts solve the problem and equal the ground truth. This operation is useful for terminating the graph and checking if the final thoughts solve the problem, but is only useful if the ground truth is known.
- ground_truth_evaluator: A function that takes in a thought state and returns a boolean indicating whether the thought solves the problem.
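A `ground_truth_evaluator` for a toy sorting task could be sketched as below; the `"current"` state key and the hard-coded target are assumptions made for the example:

```python
from typing import Dict

GROUND_TRUTH = [1, 2, 3, 4]  # known target for this toy problem

def matches_ground_truth(state: Dict) -> bool:
    # Hypothetical evaluator: the thought solves the problem iff its
    # candidate (assumed under the "current" key) equals the target.
    return state.get("current") == GROUND_TRUTH
```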