change configs for new litellm backend
- CF_Code.yaml +16 -13
- CF_CodeCollab.yaml +1 -1
- CF_CodeCritic.yaml +15 -13
- CF_CodeCriticWrongAttempt.yaml +16 -13
- CF_CodeCriticWrongAttemptWithPlan.yaml +16 -13
- CF_CodeDebug.yaml +6 -1
- CF_CodeDebugCollab.yaml +7 -2
- CF_CodeDebugCollabWithPlan.yaml +7 -2
- CF_CodeWithPlan.yaml +15 -14
- CF_Plan.yaml +12 -12
- CF_PlanCollab.yaml +2 -2
- CF_PlanCritic.yaml +15 -13
- LC_Code.yaml +15 -12
- LC_CodeCollab.yaml +1 -1
- LC_CodeCritic.yaml +16 -12
- LC_CodeCriticWrongAttempt.yaml +15 -12
- LC_CodeDebug.yaml +6 -1
- LC_CodeDebugCollab.yaml +7 -2
- LC_CodeWithPlan.yaml +12 -7
- LC_Plan.yaml +13 -12
- LC_PlanCollab.yaml +2 -2
- LC_PlanCritic.yaml +13 -12
- __init__.py +1 -1
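Every flow config in this commit follows the same pattern: the old flat `model_name`/`generation_parameters` settings are replaced by an explicit `backend` section pointing at the LiteLLM backend, plus re-added sampling parameters. A representative sketch of the new block, assembled from the added lines in the diffs below (indentation and nesting are assumed; the diff view does not preserve them):

```yaml
# Sketch of the backend block added to each flow config (indentation assumed).
backend:
  __target__: flows.backends.llm_lite.LiteLLMBackend  # backend class resolved at instantiation
  api_infos: ${local.api_information}                 # interpolated from local API-key config
  model_name:                                         # per-provider model routing
    openai: "gpt-4"
    azure: "azure/gpt-4"                              # LiteLLM's "azure/<deployment>" naming
n: 1
max_tokens: 3000
temperature: 0.3
top_p: 0.2
frequency_penalty: 0
presence_penalty: 0
```

`CF_Code.yaml` additionally sets `wait_time_per_key: 6`; `CF_Plan.yaml` only switches `model_name` to the per-provider mapping without a `backend` block.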
CF_Code.yaml
CHANGED
@@ -17,20 +17,23 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-model_name: "gpt-4"

-
+backend:
+  __target__: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  wait_time_per_key: 6
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
 n: 1
 max_tokens: 3000
 temperature: 0.3
-
-
-
-frequency_penalty: 0
-presence_penalty: 0
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to provide executable Python code that solves a competitive programming problem. The code should correctly handle all corner cases in order to pass the hidden test cases, which are used to evaluate the correctness of the solution.

@@ -43,17 +46,17 @@ system_message_prompt_template:

   The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -79,4 +82,4 @@ init_human_message_prompt_template:
   - "io_examples_and_explanation"
   partial_variables:
     code_placeholder: "{{python_code}}"
-
+
CF_CodeCollab.yaml
CHANGED
@@ -18,7 +18,7 @@ subflows_config:
     _target_: .CF_Code.instantiate_from_default_config
     name: "CodeGenerator"
     human_message_prompt_template:
-      _target_:
+      _target_: flows.prompt_template.JinjaPrompt
       template: |2-
         # Feedback on the last proposed solution
         {{code_feedback}}
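The other recurring change is on the prompt templates: every `_target_` under the `*_prompt_template` keys is set to `flows.prompt_template.JinjaPrompt` (the old target values are not recoverable from this diff view). A minimal sketch of one such block after the change (indentation assumed):

```yaml
# Sketch of a prompt-template block after the change (indentation assumed).
human_message_prompt_template:
  _target_: flows.prompt_template.JinjaPrompt  # Jinja2-based prompt template class
  template: "{{query}}"                        # Jinja2 placeholder filled at runtime
  input_variables:
    - "query"
```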
CF_CodeCritic.yaml
CHANGED
@@ -18,20 +18,22 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-
-
-
+backend:
+  __target__: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
 n: 1
 max_tokens: 3000
 temperature: 0.3

-
-
-
-presence_penalty: 0
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to identify potential issues with a competitive programming solution attempt.

@@ -46,17 +48,17 @@ system_message_prompt_template:

   Crucially, your goal is to correctly identify potential issues with the solution attempt, and not to provide the code implementation yourself.
   The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -82,4 +84,4 @@ init_human_message_prompt_template:
   - "output_description"
   - "io_examples_and_explanation"
   - "code"
-
+
CF_CodeCriticWrongAttempt.yaml
CHANGED
@@ -19,20 +19,23 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-
-
-
+backend:
+  __target__: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
 n: 1
 max_tokens: 3000
 temperature: 0.3

-
-
-
-
+
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to identify the issues with an incorrect competitive programming solution attempt.

@@ -48,17 +51,17 @@ system_message_prompt_template:

   Some aspects to consider: Is the input correctly parsed? Is the output correctly formatted? Are the corner cases correctly handled? Is there a logical mistake with the algorithm itself?
   Use the code execution results provided in the issue description to guide your reasoning/debugging.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -87,4 +90,4 @@ init_human_message_prompt_template:
   - "io_examples_and_explanation"
   - "code"
   - "testing_results_summary"
-
+
CF_CodeCriticWrongAttemptWithPlan.yaml
CHANGED
@@ -20,20 +20,23 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-
-
-
+backend:
+  __target__: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
 n: 1
 max_tokens: 3000
 temperature: 0.3

-
-
-
-
+
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to identify the issues with an incorrect competitive programming solution attempt.

@@ -51,17 +54,17 @@ system_message_prompt_template:

   Some aspects to consider: Is the input correctly parsed? Is the output correctly formatted? Is the code implementation consistent with the conceptual solution? Are the corner cases correctly handled? Is there a logical mistake with the algorithm itself?
   Use the code execution results provided in the issue description to guide your reasoning/debugging.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -94,4 +97,4 @@ init_human_message_prompt_template:
   - "plan"
   - "code"
   - "testing_results_summary"
-
+
CF_CodeDebug.yaml
CHANGED
@@ -22,7 +22,12 @@ subflows_config:
   CodeGenerator:
     _target_: .CF_Code.instantiate_from_default_config
     name: "CodeGenerator"
-
+    backend:
+      __target__: flows.backends.llm_lite.LiteLLMBackend
+      api_infos: ${local.api_information}
+      model_name:
+        openai: "gpt-4"
+        azure: "azure/gpt-4"
     human_message_prompt_template:
       template: |2-
         {{testing_results_summary}}
CF_CodeDebugCollab.yaml
CHANGED
@@ -21,9 +21,14 @@ subflows_config:
   CodeGenerator:
     _target_: .CF_Code.instantiate_from_default_config
     name: "CodeGenerator"
-
+    backend:
+      __target__: flows.backends.llm_lite.LiteLLMBackend
+      api_infos: ${local.api_information}
+      model_name:
+        openai: "gpt-4"
+        azure: "azure/gpt-4"
     human_message_prompt_template:
-      _target_:
+      _target_: flows.prompt_template.JinjaPrompt
       template: |2-
         {{testing_results_summary}}

CF_CodeDebugCollabWithPlan.yaml
CHANGED
@@ -22,9 +22,14 @@ subflows_config:
   CodeGenerator:
     _target_: .CF_CodeWithPlan.instantiate_from_default_config
     name: "CodeGenerator"
-
+    backend:
+      __target__: flows.backends.llm_lite.LiteLLMBackend
+      api_infos: ${local.api_information}
+      model_name:
+        openai: "gpt-4"
+        azure: "azure/gpt-4"
     human_message_prompt_template:
-      _target_:
+      _target_: flows.prompt_template.JinjaPrompt
      template: |2-
         {{testing_results_summary}}

CF_CodeWithPlan.yaml
CHANGED
@@ -18,20 +18,21 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-
-
-
+backend:
+  __target__: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
 n: 1
 max_tokens: 3000
 temperature: 0.3
-
-
-
-frequency_penalty: 0
-presence_penalty: 0
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to provide executable Python code that solves a competitive programming problem. The code should correctly handle all corner cases in order to pass the hidden test cases, which are used to evaluate the correctness of the solution.

@@ -46,17 +47,17 @@ system_message_prompt_template:

   The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -86,4 +87,4 @@ init_human_message_prompt_template:
   - "plan"
   partial_variables:
     code_placeholder: "{{python_code}}"
-
+
CF_Plan.yaml
CHANGED
@@ -17,19 +17,19 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-model_name:
-
+model_name:
+  openai: "gpt-4"
+  azure: "azure/gpt-4"
 n: 1
 max_tokens: 3000
 temperature: 0.3

-
-
-
-presence_penalty: 0
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to provide a high-level conceptual solution that, if implemented, will solve a given competitive programming problem.

@@ -44,17 +44,17 @@ system_message_prompt_template:

   The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -79,4 +79,4 @@ init_human_message_prompt_template:
   - "io_examples_and_explanation"
   partial_variables:
     plan_placeholder: "{{conceptual_solution}}"
-
+
CF_PlanCollab.yaml
CHANGED
@@ -21,7 +21,7 @@ subflows_config:
   PlanGenerator:
     _target_: .CF_Plan.instantiate_from_default_config
     human_message_prompt_template:
-      _target_:
+      _target_: flows.prompt_template.JinjaPrompt
       template: |2-
         # Feedback on the last proposed conceptual solution
         {{plan_feedback}}
@@ -36,7 +36,7 @@ subflows_config:
       - plan_feedback
     partial_variables:
       plan_placeholder: "{{conceptual_solution}}"
-
+
   input_interface_initialized:
     - "plan_feedback"
   PlanCritic:
CF_PlanCritic.yaml
CHANGED
@@ -18,20 +18,22 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-
-
-
+backend:
+  __target__: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
 n: 1
 max_tokens: 3000
 temperature: 0.3

-
-
-
-presence_penalty: 0
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to identify potential issues with a conceptual solution to a given competitive programming problem.

@@ -47,17 +49,17 @@ system_message_prompt_template:

   Some aspects to consider: Are there any logical mistakes with the proposed algorithm? Are the corner cases correctly handled?
   The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -81,4 +83,4 @@ init_human_message_prompt_template:
   - "output_description"
   - "io_examples_and_explanation"
   - "plan"
-
+
LC_Code.yaml
CHANGED
@@ -17,20 +17,23 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-
+backend:
+  __target__: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"

-generation_parameters:
 n: 1
 max_tokens: 3000
 temperature: 0.3

-
-
-
-presence_penalty: 0
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to provide executable Python code that solves a coding interview problem. The code should correctly handle all corner cases in order to pass the hidden test cases, which are used to evaluate the correctness of the solution.

@@ -41,17 +44,17 @@ system_message_prompt_template:

   The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -78,4 +81,4 @@ init_human_message_prompt_template:
   - "python_stub"
   partial_variables:
     code_placeholder: "{{python_code}}"
-
+
LC_CodeCollab.yaml
CHANGED
@@ -18,7 +18,7 @@ subflows_config:
     _target_: .LC_Code.instantiate_from_default_config
     name: "CodeGenerator"
     human_message_prompt_template:
-      _target_:
+      _target_: flows.prompt_template.JinjaPrompt
       template: |2-
         # Feedback on the last proposed solution
         {{code_feedback}}
LC_CodeCritic.yaml
CHANGED
@@ -18,20 +18,24 @@ output_interface:
   - "api_output"

 # ~~~ Flow specification ~~~
-
+backend:
+  __target__: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
+

-generation_parameters:
 n: 1
 max_tokens: 3000
 temperature: 0.3

-
-
-
-presence_penalty: 0
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0

 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to identify potential issues with a competitive programming solution attempt.

@@ -45,17 +49,17 @@ system_message_prompt_template:

   Crucially, your goal is to correctly identify potential issues with the solution attempt, and not to provide the code implementation yourself.
   The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+

 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+

 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}

@@ -83,4 +87,4 @@ init_human_message_prompt_template:
   - "constraints"
   - "python_stub"
   - "code"
-
+
LC_CodeCriticWrongAttempt.yaml
CHANGED
@@ -19,20 +19,23 @@ output_interface:
|
|
19 |
- "api_output"
|
20 |
|
21 |
# ~~~ Flow specification ~~~
|
22 |
-
|
|
|
|
|
|
|
|
|
|
|
23 |
|
24 |
-
generation_parameters:
|
25 |
n: 1
|
26 |
max_tokens: 3000
|
27 |
temperature: 0.3
|
28 |
|
29 |
-
|
30 |
-
|
31 |
-
|
32 |
-
presence_penalty: 0
|
33 |
|
34 |
system_message_prompt_template:
|
35 |
-
_target_:
|
36 |
template: |2-
|
37 |
Your goal is to identify the issues with an incorrect competitive programming solution attempt.
|
38 |
|
   - "api_output"
 
 # ~~~ Flow specification ~~~
+backend:
+  _target_: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
 
 n: 1
 max_tokens: 3000
 temperature: 0.3
 
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0
 
 system_message_prompt_template:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to identify the issues with an incorrect competitive programming solution attempt.
 
@@ -47,17 +50,17 @@ system_message_prompt_template:
     Some aspects to consider: Is the input correctly parsed? Is the output correctly formatted? Are the corner cases correctly handled? Is there a logical mistake with the algorithm itself?
     Use the code execution results provided in the issue description to guide your reasoning/debugging.
   input_variables: []
-
+
 
 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+
 
 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}
@@ -88,4 +91,4 @@ init_human_message_prompt_template:
     - "python_stub"
    - "code"
     - "testing_results_summary"
-
+
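The `backend` blocks introduced above map each provider to a LiteLLM-style model string, where a provider prefix such as `azure/` selects the non-default provider. A minimal sketch of that naming convention, assuming a hypothetical `resolve_model` helper (the real dispatch happens inside litellm):

```python
# Sketch: split an optional 'provider/' prefix off a model string, as in
# the `model_name` mapping above (openai: "gpt-4", azure: "azure/gpt-4").
# `resolve_model` is illustrative only; litellm performs this internally.

def resolve_model(model_name: str) -> tuple[str, str]:
    """Return a (provider, model) pair for a possibly prefixed model string."""
    if "/" in model_name:
        provider, model = model_name.split("/", 1)
        return provider, model
    # No prefix: assume the default OpenAI-style provider.
    return "openai", model_name

model_name_config = {"openai": "gpt-4", "azure": "azure/gpt-4"}
print(resolve_model(model_name_config["openai"]))  # ('openai', 'gpt-4')
print(resolve_model(model_name_config["azure"]))   # ('azure', 'gpt-4')
```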
LC_CodeDebug.yaml
CHANGED
@@ -22,7 +22,12 @@ subflows_config:
   CodeGenerator:
     _target_: .LC_Code.instantiate_from_default_config
     name: "CodeGenerator"
-
+    backend:
+      _target_: flows.backends.llm_lite.LiteLLMBackend
+      api_infos: ${local.api_information}
+      model_name:
+        openai: "gpt-4"
+        azure: "azure/gpt-4"
     human_message_prompt_template:
       template: |2-
         {{testing_results_summary}}
LC_CodeDebugCollab.yaml
CHANGED
@@ -21,9 +21,14 @@ subflows_config:
   CodeGenerator:
     _target_: .LC_Code.instantiate_from_default_config
     name: "CodeGenerator"
-
+    backend:
+      _target_: flows.backends.llm_lite.LiteLLMBackend
+      api_infos: ${local.api_information}
+      model_name:
+        openai: "gpt-4"
+        azure: "azure/gpt-4"
     human_message_prompt_template:
-      _target_:
+      _target_: flows.prompt_template.JinjaPrompt
       template: |2-
         {{testing_results_summary}}
 
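The debug flows above attach a per-subflow `backend` override on top of the defaults pulled in by `.LC_Code.instantiate_from_default_config`. A sketch of how such an override layers onto a default config, assuming a hypothetical `deep_merge` helper and made-up default values (the flows library does this with OmegaConf-style config merging):

```python
# Sketch: recursively merge a subflow override (like the `backend` block
# in LC_CodeDebugCollab.yaml above) into a default config. `deep_merge`
# and the default values below are illustrative, not the library's API.

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into a copy of `base`."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

default_config = {  # hypothetical defaults from the base flow
    "name": "CodeGenerator",
    "backend": {"model_name": {"openai": "gpt-3.5-turbo"}},
}
override = {  # the override written in the YAML diff
    "backend": {
        "_target_": "flows.backends.llm_lite.LiteLLMBackend",
        "model_name": {"openai": "gpt-4", "azure": "azure/gpt-4"},
    }
}
config = deep_merge(default_config, override)
print(config["backend"]["model_name"]["azure"])  # azure/gpt-4
```

Note that the nested `model_name` dict is merged key by key, so provider entries not mentioned in the override would survive from the defaults.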
LC_CodeWithPlan.yaml
CHANGED
@@ -18,7 +18,12 @@ output_interface:
   - "api_output"
 
 # ~~~ Flow specification ~~~
-
+backend:
+  _target_: flows.backends.llm_lite.LiteLLMBackend
+  api_infos: ${local.api_information}
+  model_name:
+    openai: "gpt-4"
+    azure: "azure/gpt-4"
 
 generation_parameters:
   n: 1
@@ -31,7 +36,7 @@ generation_parameters:
   presence_penalty: 0
 
 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to provide executable Python code that solves a coding interview problem. The code should correctly handle all corner cases in order to pass the hidden test cases, which are used to evaluate the correctness of the solution.
 
@@ -44,17 +49,17 @@ system_message_prompt_template:
 
     The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+
 
 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+
 
 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}
@@ -87,4 +92,4 @@ init_human_message_prompt_template:
   - "plan"
   partial_variables:
     code_placeholder: "{{python_code}}"
-
+
LC_Plan.yaml
CHANGED
@@ -16,19 +16,20 @@ output_interface:
   - "api_output"
 
 # ~~~ Flow specification ~~~
-model_name:
-
+model_name:
+  openai: "gpt-4"
+  azure: "azure/gpt-4"
+
 n: 1
 max_tokens: 3000
 temperature: 0.3
 
-
-
-
-presence_penalty: 0
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0
 
 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to provide a high-level conceptual solution that, if implemented, will solve a given coding interview problem.
 
@@ -41,17 +42,17 @@ system_message_prompt_template:
 
    The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+
 
 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+
 
 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}
@@ -72,5 +73,5 @@ init_human_message_prompt_template:
   - "constraints"
   partial_variables:
     plan_placeholder: "{{conceptual_solution}}"
-
+
 
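The `_target_` changes above swap every prompt template over to `flows.prompt_template.JinjaPrompt`, whose `template` strings use `{{variable}}` placeholders filled from `input_variables` and `partial_variables`. A minimal sketch of that substitution, assuming a hypothetical `render` helper (JinjaPrompt uses the full Jinja2 engine; this toy version only handles simple placeholders):

```python
# Sketch: fill {{name}} placeholders the way the templates above are
# used. `render` is a simplified stand-in for Jinja2 rendering.
import re

def render(template: str, **variables: str) -> str:
    """Replace each {{name}} placeholder with its bound value."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

# A partial variable can itself be a literal placeholder string, as in
# `plan_placeholder: "{{conceptual_solution}}"` above: the substituted
# text is not re-scanned, so the prompt keeps it verbatim.
out = render(
    "# Problem statement\n{{problem_description}}\n\nPlan: {{plan_placeholder}}",
    problem_description="Two Sum",
    plan_placeholder="{{conceptual_solution}}",
)
```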
LC_PlanCollab.yaml
CHANGED
@@ -21,7 +21,7 @@ subflows_config:
   PlanGenerator:
     _target_: .LC_Plan.instantiate_from_default_config
     human_message_prompt_template:
-      _target_:
+      _target_: flows.prompt_template.JinjaPrompt
      template: |2-
         # Feedback on the last proposed conceptual solution
         {{plan_feedback}}
@@ -36,7 +36,7 @@ subflows_config:
         - plan_feedback
       partial_variables:
         plan_placeholder: "{{conceptual_solution}}"
-
+
     input_interface_initialized:
       - "plan_feedback"
   PlanCritic:
LC_PlanCritic.yaml
CHANGED
@@ -18,20 +18,21 @@ output_interface:
   - "api_output"
 
 # ~~~ Flow specification ~~~
-model_name:
+model_name:
+  openai: "gpt-4"
+  azure: "azure/gpt-4"
 
-generation_parameters:
 n: 1
 max_tokens: 3000
 temperature: 0.3
 
-
-
-
-
+
+top_p: 0.2
+frequency_penalty: 0
+presence_penalty: 0
 
 system_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     Your goal is to identify potential issues with a conceptual solution to a given competitive programming problem.
 
@@ -45,17 +46,17 @@ system_message_prompt_template:
    Some aspects to consider: Are there any logical mistakes with the proposed algorithm? Are the corner cases correctly handled?
    The user will provide you with a task and an output format that you will strictly follow.
   input_variables: []
-
+
 
 human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: "{{query}}"
   input_variables:
     - "query"
-
+
 
 init_human_message_prompt_template:
-  _target_:
+  _target_: flows.prompt_template.JinjaPrompt
   template: |2-
     # Problem statement
     {{problem_description}}
@@ -81,4 +82,4 @@ init_human_message_prompt_template:
   - "constraints"
   - "python_stub"
   - "plan"
-
+
__init__.py
CHANGED
@@ -1,6 +1,6 @@
 # ~~~ Specify the dependencies ~~~
 dependencies = [
-    {"url": "aiflows/OpenAIChatFlowModule", "revision": "
+    {"url": "aiflows/OpenAIChatFlowModule", "revision": "821d1ba993c0be5af1b17c4e47e7dd72c4e6fd9e"},
     {"url": "aiflows/FixedReplyFlowModule", "revision": "65fbdbe19f5a8fdc48810810812552c5674d35a5"},
 ]
 
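The `__init__.py` change above re-pins the `OpenAIChatFlowModule` dependency to a full 40-character commit SHA, so the module resolves reproducibly. A sketch of a sanity check for such pins; the `is_pinned` helper is illustrative and not part of the flows library:

```python
# Sketch: verify that each dependency entry, like those in __init__.py
# above, pins a full hex commit SHA rather than a mutable ref.
import re

dependencies = [
    {"url": "aiflows/OpenAIChatFlowModule", "revision": "821d1ba993c0be5af1b17c4e47e7dd72c4e6fd9e"},
    {"url": "aiflows/FixedReplyFlowModule", "revision": "65fbdbe19f5a8fdc48810810812552c5674d35a5"},
]

def is_pinned(dep: dict) -> bool:
    """A revision counts as pinned if it is a full 40-hex-digit commit SHA."""
    return re.fullmatch(r"[0-9a-f]{40}", dep["revision"]) is not None

assert all(is_pinned(dep) for dep in dependencies)
```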