Overview
Lacework’s workload security provides visibility into all processes and applications within an organization’s cloud environments, including workload/runtime analysis and automated anomaly and threat detection.
After you install the Lacework agent, Lacework scans hosts and streams select metadata to the Lacework data warehouse to build a baseline of normal behavior, which is updated hourly. From this, Lacework can provide detailed in-context alerts for anomalous behavior by comparing each hour to the previous one. Anomaly detection uses machine learning to determine, for example, if a machine sends data to an unknown IP, or if a user logs in from an IP that has not been seen before.
Lacework offers two methods for deployment into AWS ECS Fargate: a container image direct-include approach, and a sidecar-based approach that uses a volume map. In both deployment methods, the Lacework agent runs inside the application container.
For direct-include deployments, we recommend a multi-stage image build, which starts the Lacework agent in the context of the container as a daemon.
For sidecar-based deployments, the Lacework agent sidecar container exports a storage volume that is mapped by the application container. By mounting the volume, the agent can run in the application container namespace.
Prerequisites
- AWS Fargate platform version 1.3 or 1.4.
- A supported OS distribution; see the public documentation for supported distributions.
- Lacework agent version 3.2 or later.
- The ability to run the Lacework agent as root.
Access Tokens
In an environment with mixed Fargate and non-Fargate Lacework deployments, Lacework recommends using separate access tokens for each of these deployments. This can make deployments easier to manage.
Agent Server URL
If you are a non-US user, you must configure the agent server URL. For more information, see Agent Server URL.
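If you need to set the server URL, it can be placed alongside the access token in the agent's config.json. A minimal sketch, assuming the serverurl key described in the Agent Server URL documentation (both values shown are illustrative placeholders):

```json
{
  "tokens": { "accesstoken": "YOUR_ACCESS_TOKEN_HERE" },
  "serverurl": "https://api.fra.lacework.net"
}
```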
Deployment Methods
Lacework supports two deployment methods on AWS ECS Fargate:
- Direct-include Deployment.
- Sidecar-based Deployment, using a volume map approach.
Direct-Include Deployment
Directly including the Lacework agent is recommended for scenarios where Lacework agents can be directly embedded into the container images you build. The following steps are based on AWS ECR deployment, but you can use similar steps for other registry types.
This consists of three high-level steps: modifying your Dockerfile to include a multi-stage build with the Lacework agent, building and pushing the new image, and deploying.
Step 1: Add the Agent to your existing Dockerfile
This step consists of (a) adding a build stage, (b) copying the Lacework agent binary, and (c) setting up configurations.
The following is a full example of a very simple Dockerfile along with its entrypoint script. This example adds three lines and comments indicating the Lacework Agent additions.
```docker
# syntax=docker/dockerfile:1

# Lacework Agent: adding a build stage
FROM lacework/datacollector:latest-sidecar AS agent-build-image

FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
    ca-certificates \
    curl \
    jq \
    sed \
    && rm -rf /var/lib/apt/lists/*

# Lacework Agent: copying the binary
COPY --from=agent-build-image /var/lib/lacework-backup /var/lib/lacework-backup

# Lacework Agent: setting up configurations
RUN --mount=type=secret,id=LW_AGENT_ACCESS_TOKEN \
    mkdir -p /var/lib/lacework/config && \
    echo '{"tokens": {"accesstoken": "'$(cat /run/secrets/LW_AGENT_ACCESS_TOKEN)'"}}' > /var/lib/lacework/config/config.json

COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT [ "/docker-entrypoint.sh" ]
```
And the /docker-entrypoint.sh would include the following:
```sh
#!/bin/sh
# Lacework Agent: configure and start the data collector
/var/lib/lacework-backup/lacework-sidecar.sh &

/path/to/your/existing/script
```
The RUN command uses BuildKit to securely pass the Lacework Agent access token as LW_AGENT_ACCESS_TOKEN. This is recommended, but not a requirement.
Note that it is also possible to install the Lacework Agent by fetching and installing the binaries from our official GitHub repository. Optionally, some customers choose to upload the lacework/datacollector:latest images into their ECR.
Step 2: Build and Push Image
With the image defined, you can build it locally and push it to a container registry such as ECR.
Consider the following example script:
```sh
#!/bin/sh

# Set variables for ECR
export YOUR_AWS_ECR_REGION="us-east-2"
export YOUR_AWS_ECR_URI="000000000000.dkr.ecr.${YOUR_AWS_ECR_REGION}.amazonaws.com"
export YOUR_AWS_ECR_NAME="YOUR_ECR_NAME_HERE"

# Store the Lacework Agent access token in a file (see Prerequisites to obtain one)
echo "YOUR_ACCESS_TOKEN_HERE" > token.key

# Build and tag the image
DOCKER_BUILDKIT=1 docker build \
  --secret id=LW_AGENT_ACCESS_TOKEN,src=token.key \
  --force-rm=true \
  --tag "${YOUR_AWS_ECR_URI}/${YOUR_AWS_ECR_NAME}:latest" .

# Log in to ECR and push the image
aws ecr get-login-password --region ${YOUR_AWS_ECR_REGION} | docker login --username AWS --password-stdin ${YOUR_AWS_ECR_URI}
docker push "${YOUR_AWS_ECR_URI}/${YOUR_AWS_ECR_NAME}:latest"
```
Step 3: Deploy and run
Predeployment for AWS ECR
Before attempting to deploy tasks and services using Fargate, ensure you have the following information:
- Subnet ID.
- Security group with correct permissions.
- AWS CloudWatch log group.
- A configured ECS task execution role in IAM:
  - An IAM policy: attach the existing `AmazonECSTaskExecutionRolePolicy` to your ECS task execution role.
  - An IAM trust relationship.
Task definition
To run the image, AWS requires an ECS task definition; see the AWS documentation on task definitions for more information. Consider the example here, and for more examples, visit the AWS documentation.
```json
{
  "executionRoleArn": "arn:aws:iam::YOUR_AWS_ID:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/cli-run-task-definition",
          "awslogs-region": "YOUR_AWS_ECR_REGION",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "cpu": 0,
      "environment": [
        {
          "name": "LaceworkAccessToken",
          "value": "YOUR_ACCESS_TOKEN"
        },
        {
          "name": "LaceworkServerUrl",
          "value": "OPTIONAL_SERVER_URL_OVERRIDE"
        }
      ],
      "image": "YOUR_AWS_ECR_URI.amazonaws.com/YOUR_AWS_ECR_NAME:latest",
      "essential": true,
      "name": "cli-fargate"
    }
  ],
  "memory": "1GB",
  "family": "cli-run-task-definition",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "512"
}
```
Save this file as taskDefinition.json. Then create the cluster and register the definition with the following commands:
```sh
# Create a cluster. You only need to do this once.
aws ecs create-cluster --cluster-name YOUR_CLUSTER_NAME_HERE-cluster

# Register the task definition
aws ecs register-task-definition --cli-input-json file://taskDefinition.json
```
Diagnostics and Debugging
List Task Definitions
```sh
$ aws ecs list-task-definitions
```
Create a Service
NOTE: Ensure you specify the correct subnet id (subnet-abcd1234) and securityGroup id (sg-abcd1234).
```sh
$ aws ecs create-service --cluster cli-rsfargate-cluster \
    --service-name cli-rsfargate-service \
    --task-definition cli-run-task-definition:1 \
    --desired-count 1 --launch-type "FARGATE" \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234]}"
```
NOTE: Depending on your VPC configuration, you may need to assign a public IP (assignPublicIp=ENABLED) to the service to be able to pull the example Docker image.
```sh
$ aws ecs create-service --cluster cli-rsfargate-cluster \
    --service-name cli-rsfargate-service \
    --task-definition cli-run-task-definition:1 \
    --desired-count 1 --launch-type "FARGATE" \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}"
```
List Services
```sh
$ aws ecs list-services --cluster cli-rsfargate-cluster
```
Describe Running Services
```sh
$ aws ecs describe-services --cluster cli-rsfargate-cluster --services cli-rsfargate-service
```
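To spot-check that the service has reached a steady state, you can compare runningCount to desiredCount in the describe-services output. A minimal sketch using jq (the field names match the ECS API; the inline sample below stands in for real CLI output):

```shell
#!/bin/sh
# Sample output standing in for:
#   aws ecs describe-services --cluster cli-rsfargate-cluster --services cli-rsfargate-service
SAMPLE='{"services":[{"serviceName":"cli-rsfargate-service","desiredCount":1,"runningCount":1}]}'

# Extract the counts with jq and compare them.
DESIRED=$(printf '%s' "$SAMPLE" | jq '.services[0].desiredCount')
RUNNING=$(printf '%s' "$SAMPLE" | jq '.services[0].runningCount')

if [ "$RUNNING" -eq "$DESIRED" ]; then
  echo "service steady: $RUNNING of $DESIRED tasks running"
else
  echo "service not steady: $RUNNING of $DESIRED tasks running"
fi
```

In practice, pipe the real `aws ecs describe-services` output into the same jq filters instead of the sample variable.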
Agent Upgrades
When using a multi-stage build, the latest agent release is pulled in any time you rebuild the application container. The agent also checks for, and installs, the latest release while it runs. See Agent Administration for more information.
Sidecar-Based Deployment
You can deploy Lacework for AWS Fargate using a sidecar approach, which allows you to monitor AWS Fargate containers without changing underlying container images. This deployment method leverages AWS Fargate’s Volumes from capability to map a Lacework container image directory into your application image. With this approach, the original application images stay intact and we instead modify the task definition to start and configure the Lacework agent.
This consists of two main steps: adding a sidecar container, and updating the configuration of your application container.
Requirements
- Understanding of the application image and whether ENTRYPOINT or CMD is used.
- The application container must have the following packages installed: openssl, ca-certificates, and either curl or wget.
- The entrypoint or command must be able to run as the root user.
This guide assumes you have already created a task definition with one application container to run in ECS Fargate. A task definition describes the containerized workloads that you want to run. For details, see creating a task definition.
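The package requirements above can be pre-flighted inside the application container with a short check. A sketch under common assumptions (the ca-certificates trust store path is a typical default; adjust for your distribution):

```shell
#!/bin/sh
# Preflight: check the packages the Lacework sidecar script depends on.
MISSING=""
command -v openssl >/dev/null 2>&1 || MISSING="$MISSING openssl"

# Either curl or wget is sufficient.
if ! command -v curl >/dev/null 2>&1 && ! command -v wget >/dev/null 2>&1; then
  MISSING="$MISSING curl-or-wget"
fi

# ca-certificates typically installs the system trust store here (path varies by distro).
[ -e /etc/ssl/certs/ca-certificates.crt ] || MISSING="$MISSING ca-certificates"

if [ -n "$MISSING" ]; then
  echo "missing:$MISSING"
else
  echo "all sidecar prerequisites present"
fi
```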
Step 1: Add the sidecar to your task definition
- Click Add container to add the sidecar container.
- Set the Container name to `datacollector-sidecar`.
- Set the Image to `lacework/datacollector:latest-sidecar`.
- In the Environment section, deselect the Essential checkbox.
- Finally, click Add to finish adding this sidecar container.
Step 2: Update the Application container
Before continuing, you must identify whether the container's Dockerfile uses ENTRYPOINT and/or CMD instructions. This is because we will override these values to first run the Lacework agent startup script, then the existing application entry point or command. You can examine the Dockerfile used to build the container to identify ENTRYPOINT and/or CMD instructions. Alternatively, you can use tools to inspect the container image to identify usage. For example, using the Docker Hub web interface, you can inspect the container image layers to identify the usage.
Note two important details: (a) which instructions are used, and (b) the content of those instructions.
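One way to identify these instructions without the Dockerfile is to read the image's Config metadata. With Docker available you could run docker inspect; the sketch below parses a sample Config object with jq (the image name and values shown are hypothetical):

```shell
#!/bin/sh
# With Docker available, you could run (image name is a placeholder):
#   docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' IMAGE
# The sample Config below stands in for a real image's metadata.
CONFIG='{"Entrypoint":["/docker-entrypoint.sh"],"Cmd":["nginx","-g","daemon off;"]}'

# Extract both instructions with jq; either may be null for a given image.
ENTRYPOINT=$(printf '%s' "$CONFIG" | jq -c '.Entrypoint')
CMD=$(printf '%s' "$CONFIG" | jq -c '.Cmd')
echo "ENTRYPOINT=$ENTRYPOINT"
echo "CMD=$CMD"
```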
- Click the existing application container name, or click Add container to add an application container.
- In the Environment section, ensure Essential is selected.
- After identifying whether the container uses `ENTRYPOINT` and/or `CMD` instructions, place an appropriate definition in the override section. Use the following definition:

  ```sh
  /bin/sh, -c, /var/lib/lacework-backup/lacework-sidecar.sh && /path/to/existing/arguments
  ```

  NOTE: By definition, ENTRYPOINT + CMD = the default container command with arguments. It is possible that overriding both ENTRYPOINT and CMD may not produce the desired outcome. If the application container uses ENTRYPOINT, use the ENTRYPOINT override because it takes precedence over the CMD override.

- In the Startup Dependency Ordering section, ensure that the application container depends on the Lacework sidecar container so that the application container starts only after the sidecar container starts successfully. Set Container name `datacollector-sidecar` to Condition `SUCCESS`.
- In the Storage and Logging section, ensure the application container imports the volume exported by the Lacework sidecar container. This makes the volume containing the Lacework agent executable available in the application container. For Volumes from, set the Source container to `datacollector-sidecar` and select the Read only checkbox.
- Click Add to finish this application container.
The task definition's container definitions list now includes both the sidecar and application containers.
Diagnostics and Debugging
Predeployment
Before attempting to publish a container to ECR, ensure you have the following items and information:
- Configure the ECS task execution role to include `AmazonECSTaskExecutionRolePolicy`:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```
- If it is not already configured, you may need to include some of the ECS access policies as well, such as `AmazonECS_FullAccess`.
Agent Upgrades
The sidecar container always points to the latest image on hub.docker.com, for example, lacework/datacollector:latest-sidecar. This image is updated to the latest stable release of the Lacework agent. Additionally, when the agent runs, it periodically checks whether a new version is available.
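Because the sidecar tag is latest-sidecar, picking up a new agent release is a matter of making ECS re-pull the image. A sketch using the standard ECS CLI flag --force-new-deployment (the cluster and service names are the placeholders from the earlier examples):

```shell
#!/bin/sh
# Cluster/service names from the earlier examples; adjust to your deployment.
CLUSTER="cli-rsfargate-cluster"
SERVICE="cli-rsfargate-service"

# --force-new-deployment restarts the service's tasks without a task definition
# change, causing Fargate to re-pull the :latest-sidecar image.
CMD="aws ecs update-service --cluster $CLUSTER --service $SERVICE --force-new-deployment"
echo "$CMD"
# Once credentials are configured, run the printed command directly.
```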
Example task definition JSON
This is an example task definition, with some parts removed to illustrate the relevant configurations.
```json
{
  "taskDefinition": {
    "taskDefinitionArn": "arn:aws:ecs:YOUR_AWS_REGION:YOUR_AWS_ID:task-definition/datacollector-sidecar-demo:1",
    "containerDefinitions": [
      {
        "name": "datacollector-sidecar",
        "image": "lacework/datacollector:latest-sidecar",
        "cpu": 0,
        "portMappings": [],
        "essential": false,
        "environment": [],
        "mountPoints": [],
        "volumesFrom": [],
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/datacollector-sidecar-demo",
            "awslogs-region": "YOUR_AWS_REGION",
            "awslogs-stream-prefix": "ecs"
          }
        }
      },
      {
        "name": "main-application",
        "image": "yourorganization/application-name:latest-main",
        "cpu": 0,
        "portMappings": [
          {
            "containerPort": 80,
            "hostPort": 80,
            "protocol": "tcp"
          }
        ],
        "essential": true,
        "entryPoint": [
          "/bin/sh",
          "-c"
        ],
        "command": [
          "/var/lib/lacework-backup/lacework-sidecar.sh && /docker-entrypoint.sh"
        ],
        "environment": [
          {
            "name": "LaceworkAccessToken",
            "value": "YOUR_ACCESS_TOKEN"
          }
        ],
        "mountPoints": [],
        "volumesFrom": [
          {
            "sourceContainer": "datacollector-sidecar",
            "readOnly": true
          }
        ],
        "dependsOn": [
          {
            "containerName": "datacollector-sidecar",
            "condition": "SUCCESS"
          }
        ],
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/datacollector-sidecar-demo",
            "awslogs-region": "YOUR_AWS_REGION",
            "awslogs-stream-prefix": "ecs"
          }
        }
      }
    ],
    "family": "datacollector-sidecar-demo",
    "taskRoleArn": "arn:aws:iam::YOUR_AWS_ID:role/ecsInstanceRole",
    "executionRoleArn": "arn:aws:iam::YOUR_AWS_ID:role/ecsInstanceRole",
    "networkMode": "awsvpc",
    "revision": 1,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
      // Removed for brevity
    ],
    "placementConstraints": [],
    "compatibilities": [
      "EC2",
      "FARGATE"
    ],
    "requiresCompatibilities": [
      "FARGATE"
    ],
    "cpu": "512",
    "memory": "1024",
    "registeredAt": "2021-08-25T06:20:08.202000-05:00",
    "registeredBy": "arn:aws:sts::YOUR_AWS_ID:assumed-role/your-assumed-role-name/username"
  }
}
```
Fargate Information in the Lacework Console
After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Agents.
Fargate Task Information
To view Fargate task information, navigate to Resources > Agents and view the Agent Monitor table. Note that the displayed hostname is not an actual host but rather an attribute (taskarn_containerID). Expand the hostname to display its tags, which contain Fargate task information and additional metadata collected by the agent.
The following tag provides the Fargate engine version: VmInstanceType:AWS_ECSVxFARGATE, where x can be 3 or 4 (platform version 1.3 or 1.4, respectively).

The following tags are prefixed with net.lacework.aws.fargate.
| Tag | Description |
|---|---|
| cluster | The cluster where the task was started |
| family | The task definition family |
| pullstartedat | The time the container pull started |
| pullstoppedat | The time the container pull stopped. The difference from pullstartedat represents how long the pull took. |
| revision | The task revision |
| taskarn | The full task ARN |
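As an illustration of the pullstartedat/pullstoppedat tags, the image pull duration is simply the difference between the two timestamps. A sketch assuming GNU date and made-up tag values:

```shell
#!/bin/sh
# Hypothetical pullstartedat/pullstoppedat tag values (RFC 3339 timestamps).
PULL_STARTED="2021-08-25T06:20:10Z"
PULL_STOPPED="2021-08-25T06:20:42Z"

# Convert to epoch seconds with GNU date and take the difference.
START_S=$(date -u -d "$PULL_STARTED" +%s)
STOP_S=$(date -u -d "$PULL_STOPPED" +%s)
echo "image pull took $((STOP_S - START_S)) seconds"
```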
You can also view the above tag information at Resources > Host > Applications in the List of Active Containers table. Make the Machine Tags column visible; it is hidden by default.
Fargate Container Information
You can find container information at Resources > Host > Applications in the List of Active Containers table. The container ID is available, but AWS currently does not expose the underlying infrastructure.
To view tags, make the Machine Tags column visible; it is hidden by default.
The containers listed in Container Image Information are application containers. If you used the sidecar deployment method, the sidecar container itself is not displayed in this table because it does not stay running and has no runtime or cost associated with it. The Lacework agent instead runs as a process inside your application container, so it is visible in the Applications Information table (search for datacollector).
Additional Notes
This section provides additional clarification about Lacework behavior with Fargate deployments.
FIM
By default, Lacework does not perform file integrity checks within containers. This means that for a Fargate-only environment, there is no information under the Resources > Host > Files (FIM) menu. In an environment containing both Fargate and non-Fargate deployments, the expected information displays for the non-Fargate hosts. The checks are disabled by default within containers because they are expensive in CPU and memory.
Event Triage
Lacework has the following methods for accessing Lacework events:
- Alert channels such as Email, Jira, Slack
- Lacework Console: Events
Alert Channels
Click View Details → to load the Event Details page in your default browser.
Lacework Console: Events
Click Events and set the time frame for your inspection.
A summary of the currently selected event is available in the Event Summary in the upper right quadrant of the page.
This page presents the following major launch points:
- Event Summary: review high-level event information. In the above figure, the description contains a destination host, a destination port, an application, and a source host. Each of these links provides a resource-specific view. For example, if you click the destination host, you can view all activity for that destination host seen by the Lacework Platform.
- Timeline > Selected Event > Details: opens the Event Details page. This is the same page that opens when you click the View Details link from an alert channel.
Triage Example
The following exercise focuses on the Event Details page and covers the key data points for triaging a Fargate event.
First the 'WHAT'
Expand the details for WHAT.
Header Row
| Key | Value Description | Notes |
|---|---|---|
| APPLICATION | What is running | None |
| MACHINE | The Fargate ARN:task for the job | Machine-centric view |
| CONTAINER | The container the task is running | None |
Application Row
| Key | Value Description | Notes |
|---|---|---|
| APPLICATION | What is running | None |
| EARLIEST KNOWN TIME | First time an application was seen | Useful for lining up incident responses with timelines in non-Lacework services. |
Container
| Key | Value Description | Notes |
|---|---|---|
| FIRST SEEN TIME | First time a container was seen | Useful for lining up incident responses with timelines in non-Lacework services. Useful for spot checks of your expected container lifecycle versus what is running. |
Then the 'WHERE'
Expand the details for WHERE. A WHERE section is present only if the event is network related.
| Key | Value Description | Notes |
|---|---|---|
| HOSTNAME | Pivot to inspect the occurrence of the host activity within your environment | None |
| PORT LIST | Ports over which activity was seen | Context. Do you expect this service to connect over these ports? Are there any interesting ports, such as non-standard ports for servers? |
| IN/OUT BYTES | Data transmitted/received | Infer risk based on payload size. |
Investigation
High level situational context that relates to the application deployment and your knowledge of its expected behavior.
Was the application involved in the event installed from packaged software? In this example, the event involves nginx and tomcat. If you installed tomcat or nginx using the package manager, this is marked yes. If you installed them using the package manager but this is marked no, investigate whether there was a change to how you build your container. If there was no change to your build process, investigate how a non-packaged version was installed.
Has the SHA256 hash of the application involved in the event changed? In other words, was the application altered or replaced? If so, check your build and upgrade processes. Containers typically do not receive in-place application updates, so a change here warrants continued investigation to determine its cause.
Has the application involved in the event been running as root? This indicates the scope of privilege the application has. If running as root is not part of your build process, check for an alteration to your build process and for privilege escalation.
Has the application transferred more than the median amount of data compared to the last day?
Consider the amount of data transferred versus what is expected by the application.
Related Events Timeline
The timeline is useful for building context for events around the Event Details you are currently investigating.
Summary
Using the above data, you should be able to create the following triage process.
Determine the scope of the event:
- Containers
- Applications
- Hosts
- arn:task
Drill down into the event relationships to get context across your infrastructure secured by the Lacework platform.
Determine what is expected as a part of your build processes and what is unexpected.
From this path, continue to narrow down whether there was a change to build processes, a net-new build process, or an infrastructure event/service. For these scenarios, focus on updating application, build, and network configurations.
For scenarios where there is a risk of compromise, focus on the timeline that led to the potential compromise, investigate the entities involved, and create a structured story that enables rapid IR and processes updates.