# Lean Servers

Module: `lean_interact.server`

This module provides the `LeanServer` and `AutoLeanServer` classes, which are used to interact with the Lean REPL (Read-Eval-Print Loop). The `LeanServer` class is a simple wrapper around the Lean REPL, while the `AutoLeanServer` class adds automatic memory management and session caching features.
## LeanServer

```python
LeanServer(config: LeanREPLConfig)
```

This class is a Python wrapper for the Lean REPL. Please refer to the documentation for usage examples.
Warning

Instantiate a single config before starting multiprocessing, then instantiate one `LeanServer` per process by passing the config instance to the constructor. This ensures that the REPL is already set up for your specific environment and avoids concurrency conflicts.
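The pattern can be sketched with a runnable example. Since the real `LeanServer` needs a Lean toolchain, `DummyServer` and `worker` below are illustrative stand-ins; the structure (one config created up front, one server instance per worker process) is what the warning prescribes.

```python
import multiprocessing as mp


class DummyServer:
    """Stand-in for LeanServer: one instance per process, shared config."""

    def __init__(self, config):
        self.config = config

    def run(self, cmd):
        return f"ran {cmd!r} with {self.config['project']}"


def worker(args):
    config, cmd = args
    server = DummyServer(config)  # one server per process
    return server.run(cmd)


if __name__ == "__main__":
    config = {"project": "mathlib"}  # instantiate the config once, up front
    with mp.Pool(2) as pool:
        results = pool.map(worker, [(config, "cmd1"), (config, "cmd2")])
    print(results)
```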
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config` | `LeanREPLConfig` | The configuration for the Lean server. | *required* |

Source code in `src/lean_interact/server.py`
### start

Start the Lean REPL server process. This is called automatically in the constructor.

### is_alive

### kill

Kill the Lean REPL server process and its children.

### restart

### get_memory_usage
Get the memory usage of the Lean REPL server process in MB.
Returns:

| Type | Description |
| --- | --- |
| `float` | Memory usage in MB. |
### run_dict

Run a Lean REPL dictionary request and return the Lean server output as a dictionary.
Info

When working with custom REPL implementations that might have incompatible interfaces with LeanInteract's standard commands, you can use the `run_dict` method to communicate directly with the REPL using the raw JSON protocol. This method bypasses the command-specific parsing and validation, allowing you to work with custom REPL interfaces.
Examples:

```python
# Using run_dict to send a raw command to the REPL
result = server.run_dict({"cmd": "your Lean code here"})
```
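The raw protocol is plain JSON, so a `run_dict` request is just a serializable dictionary. The sketch below shows what a request and a typical reply look like on the wire; the reply fields (`env`, `messages`) match common Lean REPL output for a `cmd` request but should be treated as illustrative here.

```python
import json

# A raw request as run_dict would send it: a plain dict, serialized as JSON.
request = {"cmd": "theorem ex (n : Nat) : n = n := by rfl"}
wire = json.dumps(request)

# A typical reply: a JSON object with the new environment id and any
# diagnostic messages (shape shown for illustration).
reply = '{"env": 0, "messages": []}'
response = json.loads(reply)

# The environment id can be threaded into follow-up requests, e.g.
# {"cmd": "...", "env": 0}, to build on the previous state.
print(response["env"])
```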
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `request` | `dict` | The Lean REPL request to execute. Must be a dictionary. | *required* |
| `verbose` | `bool` | Whether to print additional information during the verification process. | `False` |
| `timeout` | `float \| None` | The timeout for the request in seconds. | `DEFAULT_TIMEOUT` |

Returns:

| Type | Description |
| --- | --- |
| `dict` | The output of the Lean server as a dictionary. |

Raises:

| Type | Description |
| --- | --- |
| `TimeoutError` | If the Lean server does not respond within the specified timeout. |
| `ConnectionAbortedError` | If the Lean server closes unexpectedly. |
| `ChildProcessError` | If the Lean server is not running. |
| `JsonDecodeError` | If the Lean server output is not valid JSON. |
### run

```python
run(
    request: BaseREPLQuery,
    *,
    verbose: bool = False,
    timeout: float | None = DEFAULT_TIMEOUT,
    **kwargs,
) -> BaseREPLResponse | LeanError
```

Run a Lean REPL request.

Note

Thread-safe: uses a `threading.Lock` to ensure only one operation runs at a time.
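The locking guarantee can be pictured as every call funnelling through one lock. In this self-contained sketch, the counter stands in for the REPL's stdin/stdout channel, which must never see interleaved operations; `SerializedRunner` is an illustrative model, not the library's implementation.

```python
import threading


class SerializedRunner:
    """Illustrative model of run()'s locking: one operation at a time."""

    def __init__(self):
        self._lock = threading.Lock()
        self.in_flight = 0
        self.max_in_flight = 0

    def run(self, _request):
        with self._lock:  # only one thread talks to the REPL at a time
            self.in_flight += 1
            self.max_in_flight = max(self.max_in_flight, self.in_flight)
            self.in_flight -= 1


runner = SerializedRunner()
threads = [threading.Thread(target=runner.run, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(runner.max_in_flight)  # 1: calls never overlap
```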
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `request` | `BaseREPLQuery` | The Lean REPL request to execute. Must be one of the supported request types. | *required* |
| `verbose` | `bool` | Whether to print additional information. | `False` |
| `timeout` | `float \| None` | The timeout for the request in seconds. | `DEFAULT_TIMEOUT` |

Returns:

| Type | Description |
| --- | --- |
| `BaseREPLResponse \| LeanError` | Depending on the request type, the response will be one of the corresponding response types. |
### async_run (async)

```python
async_run(
    request: BaseREPLQuery,
    *,
    verbose: bool = False,
    timeout: float | None = DEFAULT_TIMEOUT,
    **kwargs,
) -> BaseREPLResponse | LeanError
```

Asynchronous version of `run()`. Runs the blocking `run()` in a thread pool.

Note

Thread-safe: uses a `threading.Lock` to ensure only one operation runs at a time.
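Offloading a blocking call to a thread pool from async code can be sketched with `asyncio.to_thread`; here `blocking_run` is a stand-in for the synchronous `run()`, not the library's actual code.

```python
import asyncio
import time


def blocking_run(request):
    """Stand-in for the synchronous run(): blocks the calling thread."""
    time.sleep(0.05)
    return f"response to {request}"


async def async_run(request):
    # Offload the blocking call to a worker thread so the event loop stays free.
    return await asyncio.to_thread(blocking_run, request)


async def main():
    # Several requests can be awaited concurrently from the event loop's
    # perspective; each occupies a worker thread while it blocks.
    return await asyncio.gather(async_run("a"), async_run("b"))


print(asyncio.run(main()))  # ['response to a', 'response to b']
```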
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `request` | `BaseREPLQuery` | The Lean REPL request to execute. Must be one of the supported request types. | *required* |
| `verbose` | `bool` | Whether to print additional information. | `False` |
| `timeout` | `float \| None` | The timeout for the request in seconds. | `DEFAULT_TIMEOUT` |

Returns:

| Type | Description |
| --- | --- |
| `BaseREPLResponse \| LeanError` | Depending on the request type, the response will be one of the corresponding response types. |
## AutoLeanServer

```python
AutoLeanServer(
    config: LeanREPLConfig,
    max_total_memory: float = 0.8,
    max_process_memory: float | None = 0.8,
    max_restart_attempts: int = 5,
    session_cache: BaseSessionCache | None = None,
)
```

Bases: `LeanServer`

This class is a Python wrapper for the Lean REPL. `AutoLeanServer` differs from `LeanServer` by automatically restarting when it runs out of memory in order to clear Lean environment states, and it automatically recovers from timeouts. An exponential backoff strategy is used to restart the server, making this class slightly more friendly for multiprocessing than `LeanServer` when multiple instances are competing for resources. Please refer to the documentation for usage examples.
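Exponential backoff means the wait before each retry grows geometrically, so competing instances quickly spread out their restart attempts. A generic sketch follows; the base delay, growth factor, and cap are illustrative values, not the ones the library uses.

```python
def backoff_delays(attempts, base=0.5, factor=2.0, cap=30.0):
    """Generic exponential backoff schedule with a cap (illustrative values)."""
    delays = []
    for attempt in range(attempts):
        # Each retry waits `factor` times longer than the previous one,
        # up to a fixed ceiling so waits stay bounded.
        delays.append(min(cap, base * factor**attempt))
    return delays


print(backoff_delays(5))  # [0.5, 1.0, 2.0, 4.0, 8.0]
```

In practice, libraries often add random jitter on top of such a schedule so that concurrent processes do not all retry at the same instant.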
Note

A session cache is implemented to keep user-selected environment / proof states across these automatic restarts. Use the `add_to_session_cache` parameter in the different class methods to add the command to the session cache. `AutoLeanServer` works best when only a few states are cached simultaneously. You can use `remove_from_session_cache` and `clear_session_cache` to clear the session cache. Cached state indices are negative integers starting from -1, so they do not conflict with the positive integers used by the Lean REPL.
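The sign convention for state ids can be sketched with a toy cache; `ToySessionCache` is illustrative only, not the library's `BaseSessionCache` implementation.

```python
class ToySessionCache:
    """Illustrative: cached states get ids -1, -2, ...; REPL states are >= 0."""

    def __init__(self):
        self._states = {}

    def add(self, state):
        # Negative ids starting from -1, so they never collide with the
        # non-negative state ids handed out by the Lean REPL itself.
        session_id = -(len(self._states) + 1)
        self._states[session_id] = state
        return session_id

    def get(self, session_id):
        return self._states[session_id]


cache = ToySessionCache()
first = cache.add("env after importing a library")
second = cache.add("some proof state")
print(first, second)  # -1 -2
```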
Note

The session cache is specific to each `AutoLeanServer` instance and is cleared when the instance is deleted. If you want truly persistent states, you can use the `pickle` and `unpickle` methods to save and load states to disk.
Warning

Instantiate a single config before starting multiprocessing, then instantiate one `LeanServer` per process by passing the config instance to the constructor. This ensures that the REPL is already set up for your specific environment and avoids concurrency conflicts.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config` | `LeanREPLConfig` | The configuration for the Lean server. | *required* |
| `max_total_memory` | `float` | The maximum proportion of system-wide memory usage (across all processes) before triggering a Lean server restart. This is a soft limit ranging from 0.0 to 1.0, with default 0.8 (80%). When system memory exceeds this threshold, the server restarts to free memory. Particularly useful in multiprocessing environments to prevent simultaneous crashes. | `0.8` |
| `max_process_memory` | `float \| None` | The maximum proportion of the memory hard limit (set in …). | `0.8` |
| `max_restart_attempts` | `int` | The maximum number of consecutive restart attempts allowed before raising a …. | `5` |
| `session_cache` | `BaseSessionCache \| None` | The session cache to use, if specified. | `None` |
### restart

Restart the Lean REPL server and reload the session cache.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `verbose` | `bool` | Whether to print additional information during the restart process. | `False` |
### remove_from_session_cache

Remove an environment from the session cache.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `session_state_id` | `int` | The session state id to remove. | *required* |
### clear_session_cache

Clear the session cache.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `force` | `bool` | Whether to directly clear the session cache. | `False` |
### run_dict

Warning

This method is not available with automated memory management. Please use `run`, or use `run_dict` from the `LeanServer` class.

Raises:

| Type | Description |
| --- | --- |
| `NotImplementedError` | This method is not available with automated memory management. Please use `run`, or `run_dict` from the `LeanServer` class. |
### run

```python
run(
    request: BaseREPLQuery,
    *,
    verbose: bool = False,
    timeout: float | None = DEFAULT_TIMEOUT,
    add_to_session_cache: bool = False,
) -> BaseREPLResponse | LeanError
```

Run a Lean REPL request with optional session caching.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `request` | `BaseREPLQuery` | The Lean REPL request to execute. Must be one of the supported request types. | *required* |
| `verbose` | `bool` | Whether to print additional information. | `False` |
| `timeout` | `float \| None` | The timeout for the request in seconds. | `DEFAULT_TIMEOUT` |
| `add_to_session_cache` | `bool` | Whether to add the command to the session cache. If `True`, the resulting state is kept across automatic restarts. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `BaseREPLResponse \| LeanError` | Depending on the request type, the response will be one of the corresponding response types. |
### async_run (async)

```python
async_run(
    request: BaseREPLQuery,
    *,
    verbose: bool = False,
    timeout: float | None = DEFAULT_TIMEOUT,
    add_to_session_cache: bool = False,
) -> BaseREPLResponse | LeanError
```

Asynchronous version of `run()` for `AutoLeanServer`. Runs the blocking `run()` in a thread pool.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `request` | `BaseREPLQuery` | The Lean REPL request to execute. Must be one of the supported request types. | *required* |
| `verbose` | `bool` | Whether to print additional information. | `False` |
| `timeout` | `float \| None` | The timeout for the request in seconds. | `DEFAULT_TIMEOUT` |
| `add_to_session_cache` | `bool` | Whether to add the command to the session cache. If `True`, the resulting state is kept across automatic restarts. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `BaseREPLResponse \| LeanError` | Depending on the request type, the response will be one of the corresponding response types. |
### start

Start the Lean REPL server process. This is called automatically in the constructor.

### is_alive

### kill

Kill the Lean REPL server process and its children.

### get_memory_usage

Get the memory usage of the Lean REPL server process in MB.

Returns:

| Type | Description |
| --- | --- |
| `float` | Memory usage in MB. |