Languages: English
Size: n<1K

Getting Started

First install Lean 4. Then clone this repo:

git clone --recurse-submodules https://huggingface.co/datasets/elohn/miniCodeProps

The outer LeanSrc folder is a Lean project. You can open it directly in VSCode and check that the proofs in LeanSrc/Sorts.lean type-check, following the instructions for working on an existing Lean project in the Lean 4 documentation. The main miniCodeProps folder handles extracting the benchmark and computing baselines. If anything fails while building Lean or running lake exe cache get from LeanSrc, the Lean Zulip chat is the best resource for troubleshooting.

After cloning the repo, you will need to install Lean REPL. By default, our scripts expect the repl folder to sit directly inside the miniCodeProps folder. Run lake build from within the repl folder.
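Assuming Lean REPL is obtained from the standard leanprover-community/repl repository on GitHub (an assumption; use whatever source the project recommends), the setup described above looks roughly like:

```shell
# Run from inside the miniCodeProps folder, so that repl/ ends up
# directly inside it, as the scripts expect (paths are assumptions).
git clone https://github.com/leanprover-community/repl
cd repl
lake build
```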

The extract.py script is used only to create the JSON-formatted benchmark.

The baseline.py script contains the code we used to produce our baseline results. It shows how to interact with Lean REPL programmatically, although some interactions are still somewhat buggy: the REPL occasionally sends an extra newline or an oddly formatted message, which forces our script to restart it. We ran our baselines using LLMStep, but our code also includes a natural place to plug in your own function for generating tactics given the goal and file context (see get_tactics_llmstep in baseline.py). We modified the LLMStep server to return the average log-probability of each suggestion, which we use to implement best-first search.
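As an illustration, best-first search over average suggestion log-probabilities can be sketched as below. The names generate_tactics and apply_tactic are hypothetical stand-ins for the LLMStep server call and the Lean REPL interaction in baseline.py; this is a minimal sketch, not the benchmark's actual implementation.

```python
import heapq
from itertools import count

def best_first_search(initial_goal, generate_tactics, apply_tactic, max_expansions=100):
    """Expand the partial proof with the highest cumulative score, where each
    tactic's score is its average token log-probability from the model."""
    tie = count()  # tie-breaker so heapq never has to compare goal states
    # heap entries: (negated cumulative score, tie, goal, tactic sequence so far)
    heap = [(0.0, next(tie), initial_goal, [])]
    expansions = 0
    while heap and expansions < max_expansions:
        neg_score, _, goal, seq = heapq.heappop(heap)
        expansions += 1
        for tactic, avg_logprob in generate_tactics(goal):
            result = apply_tactic(goal, tactic)
            if result is None:           # tactic failed to elaborate
                continue
            if result == "no goals":     # proof complete
                return seq + [tactic]
            heapq.heappush(heap, (neg_score - avg_logprob, next(tie),
                                  result, seq + [tactic]))
    return None

# Toy demo with stubbed model/REPL (pure assumptions, not the real API):
def fake_tactics(goal):
    return [("simp", -0.1), ("rfl", -2.0)] if goal == "g0" else [("rfl", -0.5)]

def fake_apply(goal, tactic):
    if goal == "g0" and tactic == "simp":
        return "g1"
    if goal == "g1" and tactic == "rfl":
        return "no goals"
    return None

print(best_first_search("g0", fake_tactics, fake_apply))  # → ['simp', 'rfl']
```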

Reproducing Baselines

First, ensure that you have installed Lean and Lean REPL as detailed above. Before running baseline.py with any arguments, check that your OS is set at the top of utils.py. At the moment we support interacting with Lean on macOS and Ubuntu (20.04).

Next-Step Baselines

Our experiments were run on an A100 GPU. Smaller GPUs may not be able to run Llemma7B, but will likely work with Pythia and ntp-context.

Clone our fork of LLMStep. After following the LLMStep setup instructions,

  • For Pythia2.8B, run python3 python/server_vllm.py (or, if CPU-bound, run python3 python/server.py)
  • For Llemma7B, run python3 python/server_llemma.py
  • For ntp-context-1.3B, run python3 python/server_context.py

In another terminal, run python3 baseline.py --bench_type nextstep

Full-Proof Baseline

Run export OPENAI_API_KEY=<your key here>. Then simply run python3 baseline.py. You can also specify which OpenAI model to use for proof generation via python3 baseline.py --gpt_model <your model name>, although our tests only used gpt-4-turbo.
