Configuration#

Note

By default, if no configuration is provided, Lithops will use the Localhost backend to run the functions.

To work with Lithops in the cloud, you must configure at least one compute backend and one storage backend. Lithops works with the leading cloud providers as well as with on-premise and Kubernetes platforms, so you can choose the compute and storage backends that best fit your needs.

Lithops configuration can be provided either through a configuration file or at runtime via a Python dictionary.
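
As the note above points out, running Lithops with no configuration at all uses the Localhost backend, so a first test needs no setup. Below is a minimal sketch of that zero-configuration path; the function and inputs are only illustrative:

import lithops

def double(x):
    return x * 2

if __name__ == '__main__':
    # With no configuration file or config dictionary present,
    # Lithops falls back to the Localhost backend
    fexec = lithops.FunctionExecutor()
    fexec.map(double, [1, 2, 3])
    print(fexec.get_result())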

Compute and Storage backends#

Choose your compute and storage engines from the supported backends:

  - Compute backends

  - Storage backends

Configuration File#

To configure Lithops through a configuration file you have multiple options:

  1. Create a new file called config in the ~/.lithops folder.

  2. Create a new file called .lithops_config in the root directory of the project from where you will execute your Lithops scripts.

  3. Create a new file called config in the /etc/lithops/ folder (i.e., /etc/lithops/config). This is useful for sharing the configuration file on multi-user machines.

  4. Create the config file in any other location and set the LITHOPS_CONFIG_FILE environment variable to the absolute or relative path of the configuration file:

LITHOPS_CONFIG_FILE=<CONFIG_FILE_LOCATION>
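
Whichever location you choose, the configuration file is written in YAML. Below is a minimal sketch that mirrors the dictionary example in the next section, using IBM Code Engine and IBM Cloud Object Storage; the values in angle brackets are placeholders you must fill in, and additional backend-specific keys may apply (see the backend pages):

# Minimal sketch of a Lithops config file (e.g. ~/.lithops/config)
lithops:
    backend: code_engine
    storage: ibm_cos

ibm:
    region: <REGION>
    iam_api_key: <IAM_API_KEY>
    resource_group_id: <RESOURCE_GROUP_ID>

ibm_cos:
    storage_bucket: <STORAGE_BUCKET>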

Configuration keys at runtime#

An alternative way to configure Lithops is to use a Python dictionary. This option allows you to pass all the configuration details as part of the Lithops invocation at runtime. You can see the full list of configuration keys in the Configuration Reference section.

Here is an example of providing configuration keys for IBM Code Engine and IBM Cloud Object Storage:

import lithops

# All configuration keys are passed at runtime as a Python dictionary
config = {
    'lithops': {
        'backend': 'code_engine',   # compute backend
        'storage': 'ibm_cos'        # storage backend
    },
    'ibm': {
        'region': 'REGION',
        'iam_api_key': 'IAM_API_KEY',
        'resource_group_id': 'RESOURCE_GROUP_ID'
    },
    'ibm_cos': {
        'storage_bucket': 'STORAGE_BUCKET'
    }
}

def hello_world(number):
    return f'Hello {number}!'

if __name__ == '__main__':
    fexec = lithops.FunctionExecutor(config=config)
    fexec.map(hello_world, [1, 2, 3, 4])
    print(fexec.get_result())

Configuration Reference#

Lithops Config Keys#

| Group | Key | Default | Mandatory | Additional info |
|---|---|---|---|---|
| lithops | backend | aws_lambda | no | Compute backend implementation. Default is AWS Lambda. |
| lithops | storage | aws_s3 | no | Storage backend implementation. Default is AWS S3. |
| lithops | data_cleaner | True | no | If True, automatically deletes temporary data written to storage_bucket/lithops.jobs. |
| lithops | monitoring | storage | no | Monitoring system implementation. Options: storage or rabbitmq. |
| lithops | monitoring_interval | 2 | no | Interval in seconds for monitoring checks (used if monitoring is set to storage). |
| lithops | data_limit | 4 | no | Maximum size (in MB) for iterator data chunks. Set to False for unlimited size. |
| lithops | execution_timeout | 1800 | no | Maximum execution time for functions in seconds. Functions exceeding this time are killed. Can also be set per call using the timeout parameter. |
| lithops | include_modules | [] | no | List of dependencies to explicitly include for pickling. If empty, all required dependencies are included. If set to None, no dependencies are included. |
| lithops | exclude_modules | [] | no | List of dependencies to explicitly exclude from pickling. Ignored if include_modules is set. |
| lithops | log_level | INFO | no | Logging level. Options: WARNING, INFO, DEBUG, ERROR, CRITICAL. Set to None to disable logging. |
| lithops | log_format | %(asctime)s [%(levelname)s] %(name)s -- %(message)s | no | Format string for log messages. |
| lithops | log_stream | ext://sys.stderr | no | Logging output stream, e.g., ext://sys.stderr or ext://sys.stdout. |
| lithops | log_filename | | no | File path for logging output. Takes precedence over log_stream if set. |
| lithops | retries | 0 | no | Number of retries for failed function invocations when using the RetryingFunctionExecutor. Default is 0. Can be overridden per API call. |
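
All of these keys live under the lithops group of the configuration, whether it comes from a configuration file or from a dictionary. Below is a short sketch combining a few of the keys above with the Localhost backend so it runs without cloud credentials; the function and values shown are only illustrative:

import lithops

config = {
    'lithops': {
        'backend': 'localhost',      # compute backend that needs no credentials
        'storage': 'localhost',      # local storage backend
        'log_level': 'DEBUG',        # more verbose logging while debugging
        'execution_timeout': 600,    # kill functions running longer than 10 minutes
        'data_cleaner': True         # remove temporary job data when finished
    }
}

def hello(name):
    return f'Hello {name}!'

if __name__ == '__main__':
    fexec = lithops.FunctionExecutor(config=config)
    fexec.map(hello, ['World'])
    print(fexec.get_result())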