Configuration#
Note
By default, if no configuration is provided, Lithops will use the Localhost backend to run the functions.
To work with Lithops on the Cloud, you must configure at least one compute backend and one storage backend. Lithops works with the leading cloud providers as well as with on-premise and Kubernetes platforms, so you have multiple options to choose the compute and storage backends that best fit your needs.
Lithops configuration can be provided either through a configuration file or at runtime via a Python dictionary.
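For instance, running with no configuration at all executes everything on the local machine. The following is a minimal sketch (the function and input data are illustrative):
import lithops

def double(x):
    return x * 2

if __name__ == '__main__':
    # No configuration provided: Lithops falls back to the Localhost backend
    # for both compute and storage.
    fexec = lithops.FunctionExecutor()
    fexec.map(double, [1, 2, 3])
    print(fexec.get_result())   # [2, 4, 6]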
Compute and Storage backends#
Choose your compute and storage backends from the table below:
[Table: available Compute backends | Storage backends]
Configuration File#
To configure Lithops through a configuration file you have multiple options:
1. Create a new file called config in the ~/.lithops folder.
2. Create a new file called .lithops_config in the root directory of your project, from where you will execute your Lithops scripts.
3. Create a new file called config in the /etc/lithops/ folder (i.e. /etc/lithops/config). Useful for sharing the config file on multi-user machines.
4. Create the config file in any other location and set the LITHOPS_CONFIG_FILE environment variable to the absolute or relative path of the configuration file:
LITHOPS_CONFIG_FILE=<CONFIG_FILE_LOCATION>
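As an illustration, a config file equivalent to the Python dictionary shown in the next section could look like this (YAML format; REGION, IAM_API_KEY, RESOURCE_GROUP_ID and STORAGE_BUCKET are placeholders to replace with your own values):
lithops:
    backend: ibm_cf
    storage: ibm_cos

ibm:
    region: REGION
    iam_api_key: IAM_API_KEY
    resource_group_id: RESOURCE_GROUP_ID

ibm_cos:
    storage_bucket: STORAGE_BUCKET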
Configuration keys in runtime#
An alternative mode of configuration is to use a Python dictionary. This option allows you to pass all the configuration details as part of the Lithops invocation at runtime. You can see the full list of configuration keys in the Configuration Reference section.
Here is an example of providing configuration keys for IBM Cloud Functions and IBM Cloud Object Storage:
import lithops

config = {'lithops': {'backend': 'ibm_cf', 'storage': 'ibm_cos'},
          'ibm': {'region': 'REGION',
                  'iam_api_key': 'IAM_API_KEY',
                  'resource_group_id': 'RESOURCE_GROUP_ID'},
          'ibm_cos': {'storage_bucket': 'STORAGE_BUCKET'}}

def hello_world(name):
    return 'Hello {}!'.format(name)

if __name__ == '__main__':
    fexec = lithops.FunctionExecutor(config=config)
    fexec.call_async(hello_world, 'World')
    print(fexec.get_result())
Configuration Reference#
Lithops Config Keys#
Group | Key | Default | Mandatory | Additional info
---|---|---|---|---
lithops | backend | | no | Compute backend implementation. IBM Cloud Functions is the default. If not set, Lithops checks the mode and uses the backend set under the serverless or standalone sections described below.
lithops | storage | | no | Storage backend implementation. IBM Cloud Object Storage is the default.
lithops | data_cleaner | | no | If set to True, the cleaner automatically deletes all the temporary data that was written into storage_bucket/lithops.jobs.
lithops | monitoring | | no | Monitoring system implementation. One of: storage or rabbitmq.
lithops | monitoring_interval | | no | Monitoring check interval in seconds when storage monitoring is used.
lithops | data_limit | | no | Max (iter)data size (in MB). Set to False for unlimited size.
lithops | execution_timeout | | no | Functions are automatically killed if they exceed this execution time (in seconds). Alternatively, it can be set per call in call_async(), map() or map_reduce() using the timeout parameter.
lithops | include_modules | | no | Explicitly pickle these dependencies. With the default empty list, all required dependencies are pickled. If explicitly set to None, no dependencies are pickled.
lithops | exclude_modules | | no | Exclude these modules from the pickled dependencies. Ignored if include_modules is set.
lithops | log_level | | no | Logging level. One of: WARNING, INFO, DEBUG, ERROR, CRITICAL. Set to None to disable logging.
lithops | log_format | | no | Logging format string.
lithops | log_stream | | no | Logging stream, e.g. ext://sys.stderr, ext://sys.stdout.
lithops | log_filename | | no | Path to a file. log_filename takes precedence over log_stream.
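As a rough sketch of how the keys above combine, the following dictionary tunes logging, monitoring and execution limits. The values are arbitrary examples rather than recommended defaults, and the backend-specific sections are omitted:
import lithops

config = {'lithops': {'backend': 'ibm_cf',
                      'storage': 'ibm_cos',
                      'log_level': 'DEBUG',              # WARNING, INFO, DEBUG, ERROR, CRITICAL or None
                      'log_stream': 'ext://sys.stdout',
                      'monitoring': 'storage',           # or 'rabbitmq'
                      'monitoring_interval': 2,          # seconds between storage checks (example value)
                      'execution_timeout': 600,          # kill functions running longer than 10 minutes
                      'data_limit': 8,                   # max (iter)data size in MB; False for unlimited
                      'data_cleaner': True}}             # auto-delete temporary data in storage_bucket/lithops.jobs

# Add the backend-specific sections ('ibm', 'ibm_cos', ...) as in the earlier
# example, then pass the dictionary to the executor:
# fexec = lithops.FunctionExecutor(config=config)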