Three Options (and Tips) for Creating a Python AWS Lambda Function

AWS Lambda is a service introduced by Amazon in 2014. It enables users to run code without needing a server on their side: there is no server to find, configure, or maintain, which reduces setup time significantly compared with an “old-school” from-scratch server setup. The service provides logging via CloudWatch and automatic scaling, and it supports several programming languages, including Python, Ruby, Node.js, Java, Go, PowerShell, and C#. In this post we will take a closer look at the ways to run Python code as an AWS Lambda function.

First Option: Manually Created Function

The first option is to create a function from the AWS Console. Once the function is created, the user can go to the function’s configuration tab and choose one of the code entry types: edit code inline, upload a .zip file, or upload a file from Amazon S3. Inline code editing opens an online editor with a sample lambda handler. The other two options are similar: both let the user point at the code source, either an uploaded zip archive or an Amazon S3 URL. After that choice, the code is displayed in the online editor, provided it does not exceed internal AWS size limits.
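The inline editor pre-populates a sample handler. A minimal sketch of what such a Python handler looks like (the exact template varies by runtime version):

```python
import json

def lambda_handler(event, context):
    # `event` carries the invocation payload as a dict; `context` exposes
    # runtime metadata such as the request ID and remaining execution time.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda!"}),
    }
```

The file name and function name together form the entry point (e.g. `lambda_function.lambda_handler`), which is configured on the same page.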


All code changes must be saved before testing; unfortunately, there is no option to save code on the fly. There is, however, an option to choose a runtime for the function from the list of available languages, as well as a way to define the function’s entry point. AWS provides an internal testing tool via the “Test” button on the function page. Clicking the button opens a form for specifying the test input data. After the configuration is saved, the user can run the test. The results appear with the status, response, and summary, along with some information from the logs.
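The “Test” form pre-fills a simple JSON event (three `key1`/`key2`/`key3` pairs in the classic template). The same check can be reproduced locally by feeding that dict straight into the handler, as in this sketch (the echo-style handler is illustrative):

```python
# The console's classic default test event (illustrative).
test_event = {"key1": "value1", "key2": "value2", "key3": "value3"}

def lambda_handler(event, context):
    # Echo the first key back, mirroring the old console sample.
    return event.get("key1")

print(lambda_handler(test_event, None))  # value1
```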


This approach works well for small, single-file functions that require few additional libraries. When additional libraries are needed, they must be downloaded, installed, and archived alongside the main file, then uploaded to AWS as a zip and referenced in the code as usual.
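Building that zip can be scripted. A minimal sketch using only the standard library, assuming dependencies were already installed into the source directory with `pip install -r requirements.txt -t .` (the function and path names are illustrative):

```python
import os
import zipfile

def build_deployment_package(source_dir, zip_path):
    """Zip the handler file and its co-installed dependencies for upload.

    Lambda expects modules at the archive root, so every file is stored
    relative to source_dir rather than with its absolute path.
    """
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                full_path = os.path.join(root, name)
                zf.write(full_path, os.path.relpath(full_path, source_dir))
    return zip_path
```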

Second Option: AWS SAM

The second option for creating a Python-based AWS Lambda function is the AWS Serverless Application Model (SAM), an open-source framework for building serverless AWS applications. To use it, `aws-sam-cli` must be installed. After that installation, a new project can be created with `sam init --runtime python3.7`. Unless specified otherwise, the default project name is `sam-app`.

The main configuration file here is `template.yaml`. It defines all endpoints and resources the application needs to run properly. All dependencies are listed in the `requirements.txt` file. To build them, the command `sam build --use-container` is executed; it creates a `.aws-sam/build/app-name` directory with all dependencies copied into it. To test the app locally, `sam local start-api` is executed and the application becomes accessible at `http://127.0.0.1:3000`. All application code goes into the `app.py` file.
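The handler in `app.py` follows the API Gateway proxy-integration contract: it receives the request as an event dict and must return a `statusCode` plus a string `body`. A sketch close in shape to the `sam init` hello-world template (the greeting logic is an illustrative addition):

```python
import json

def lambda_handler(event, context):
    # API Gateway passes query parameters inside the event; the response
    # body must be a JSON *string*, hence the explicit json.dumps.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```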

As soon as the code is ready, the deployment package is created. A new bucket is first created via `aws s3 mb s3://bucketname`. Then the deployment package is built with the AWS SAM CLI command `sam package --output-template-file package.yaml --s3-bucket bucketname`. The newly created `package.yaml` file is similar to `template.yaml` but has an additional `CodeUri` property pointing at the Amazon S3 bucket that holds the application package. Deployment itself is done via `sam deploy --template-file package.yaml --stack-name app-name --capabilities CAPABILITY_IAM --region region`. The `--capabilities` parameter allows AWS CloudFormation to create an IAM role for the application. This solution fits applications with dependencies, but the configuration files are a bit of a mess, and it requires an Amazon S3 bucket.

Third Option: Chalice

A third option, ideal for more complicated functions, is `chalice`. `Chalice` is a microframework for creating and deploying serverless Python applications. It was designed to work with AWS and lets users create REST APIs, periodic tasks, S3 event handlers, and SQS queue listeners. The recommended way to install `chalice` is inside a `virtualenv` environment using `pip`. It is very important to create the environment with the correct Python version. AWS currently supports Python 2.7, 3.6, and 3.7, so to create a project with Python 3.7 the following command should be used: `virtualenv --python $(which python3.7) ~/.virtualenvs/app-name`. After that step, the environment should be activated with `source ~/.virtualenvs/app-name/bin/activate` and `chalice` installed with `pip install chalice`.

A new project is generated via `chalice new-project app-name`. The entry point is `app.py`; all functions that should be deployed as separate lambda endpoints are defined in this file. Before deploying the application, AWS credentials must be added to the AWS configuration file at `~/.aws/config`, with three variables defined: `aws_access_key_id`, `aws_secret_access_key`, and `region`. To run the application locally, the `chalice local` command should be executed. It starts a server on `http://127.0.0.1:8000`, which can be hit via curl, Postman, or any other tool for making requests.

Execute the `chalice deploy` command to deploy the application. The command takes an optional `--stage` argument, which is useful for managing different environments. Additional libraries can be declared in the `requirements.txt` file. If there are any additional libraries that need compilation, they should be downloaded, compiled, and placed in the `vendor` folder before deployment. This method is tricky and requires a lot of manual work if an application relies on many additional libraries. There is an alternative, though: a special docker image.

To be able to use it, docker must be installed. Then the `docker-lambda` image should be pulled and run with the command `docker run -v ~/.aws:/root/.aws:ro -v ~/.ssh:/root/.ssh:ro -v "${CURDIR}":/var/task -it lambci/lambda:build-python3.7 "scripts/script.sh"`. This command builds an environment similar to Lambda’s, using Python 3.7. To install all necessary dependencies, an additional script is added: it installs everything listed in the `requirements.txt` file via `pip install -r requirements.txt` and then runs `chalice deploy`.

These steps build an environment similar to the local one, with all additional dependencies installed, and then deploy it without any manual library handling. This solution seems to be the best option for RESTful applications with multiple endpoints and additional dependencies.

Configuration Notes

Here are some additional considerations for configuring a lambda function through the AWS console. There is a special section called Environment variables where all necessary variables can be defined, but keep in mind that these variables are erased on each deploy. When using `chalice`, environment variables can be defined in `.chalice/config.json`, in the `environment_variables` section, separately for each stage. This is the preferred approach, because variables defined this way are set again after every deploy. Other notable configuration options include: an execution role that gives the application permission to access CloudWatch logs, Secrets Manager, and so on; the amount of memory allocated for function execution; the timeout; the ability to connect to a Virtual Private Cloud; and concurrency settings.
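Inside the function, variables defined either way are read through `os.environ`, so the code does not care whether they came from the console or from `.chalice/config.json`. A small sketch (`STAGE` is an illustrative variable name):

```python
import os

def lambda_handler(event, context):
    # STAGE is assumed to be defined under "Environment variables" in the
    # console, or in the environment_variables section of .chalice/config.json.
    stage = os.environ.get("STAGE", "dev")
    return {"stage": stage, "debug": stage != "prod"}
```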

Sphere’s development team can guide you on how to best create AWS Lambda functions using Python, and a thousand other techniques. Visit www.sphereinc.com to learn more about our methods and company.

 
