Implementing a container manager in Rust.

Find the code at https://github.com/willdeuschle/cruise.

What does a container manager do?

A container manager is responsible for a subset of the functionality required to run a Linux container. Typically, this functionality includes:

  • pulling images, preparing a container filesystem and bundle
  • creating and starting containers, stopping and deleting containers, and retrieving status updates from containers
  • attaching to a running container, executing another process in a container, extracting logs from it, and collecting its exit status
  • surviving restarts without impairing or losing track of running containers
  • exposing functionality to an end-user via a CLI/RPC interface
Examples of existing container managers include containerd, dockerd (which, as far as container managers go, is just some branding on top of containerd), cri-o, and rkt. Most serious container managers implement Kubernetes’ container runtime interface (CRI), a standardized spec for container managers. Note that the terms “container runtime” and “container manager” are used somewhat ambiguously and interchangeably - I will continue referring to them as container managers.

What does a container manager not do?

Container managers themselves need to rely on other pieces of software (both above and below them in the stack) to provide their desired functionality.

  • A container manager is not responsible for setting up the Linux primitives like cgroups and namespaces that actually define a Linux container. That work is left to what we will call a container runtime, like runc. Container managers invoke container runtimes like runc to accomplish these “lower level” tasks.
  • A container manager is typically not directly responsible for providing the ability to attach to a running container, execute another process in a container, extract logs from it, or collect its exit status. This is usually left to what’s known as a container runtime shim, which lives for the lifetime of the container process itself. This is necessary to achieve one of the expectations stated above: container managers should be able to survive restarts without affecting their managed containers.
  • A container manager can (but usually does not) provide controller-like functionality for its containers. These use cases are typically handled by software higher up the stack that uses the API exposed by a container manager. For example, it may be desirable to restart containers that exit. A container manager does not need to provide this functionality, but another piece of software can use the container manager to implement it (Kubernetes’ kubelet does this, for example).

What we will be implementing (for now)

For the initial implementation, we will implement the following functionality:

  • create, start, stop, delete, list, and get status for containers
  • survive container manager restarts gracefully
  • expose functionality to an end-user via a CLI
Notably, we will not be implementing the ability to attach to, exec in, extract logs from, or collect exit statuses of the containers we manage, as this requires a container runtime shim (which we will implement later). Additionally, we will not be implementing image pulling/storage/prep at this time - the prepared filesystem will simply be accepted as input to the container manager. We may implement this at some point in the future.

Design spec

High-level

As a user, I interact with my container manager by telling it to do something like “create a busybox container” or “retrieve the status of my nginx container”. A typical interaction looks like the terminal session walked through in the “Let’s see it in action” section at the end of this post.

This means our container manager needs both server and client components. More specifically, the container manager itself is a daemon (the server component) that exposes its API to users like me via a CLI (the client component).

The client and server are going to need some means of communicating locally, and we’ll also need to settle on a communication protocol for them to speak.

We can perform the local communication between client and server over a Unix domain socket or a localhost TCP socket. For the communication protocol, a natural choice is gRPC (via HTTP/2) because most languages have good support for client/server gRPC bindings, and the CRI interface is defined in terms of gRPC bindings. Alternatively, we could use something like REST.

For our implementation, we’ll go with gRPC over a TCP socket bound to localhost at an agreed upon port. The current known requirements for our container manager are:
  • client (CLI)
  • server (daemon)
  • communication using gRPC over a TCP socket
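To make the transport choice concrete, here is a hypothetical sketch of the server side using the tonic and tokio crates, assuming a containers.proto compiled by tonic-build into a container_service_server module. The service name, port, and handler type here are assumptions for illustration, not the project’s actual definitions:

use tonic::transport::Server;
// Hypothetical: generated by tonic-build from a containers.proto we would define.
use container_service_server::ContainerServiceServer;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // agreed-upon localhost port (an assumption for this sketch)
    let addr = "127.0.0.1:50051".parse()?;
    Server::builder()
        // ContainerManagerHandler would implement the generated service trait
        .add_service(ContainerServiceServer::new(ContainerManagerHandler::default()))
        .serve(addr)
        .await?;
    Ok(())
}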
This forms the high-level framework in which our container manager will be operating. Let’s next consider the different functional components of a container manager.

Container manager components

The container manager needs to prepare container bundles, maintain persistent container state to survive restarts, communicate with the low-level container runtime (runc or a runtime shim), and service client requests. Note that most container managers also provide image pulling/caching/unpacking, but we will not be implementing this piece. Based on these requirements, we can roughly break our container manager down into the following components:

  • server: shepherds incoming client requests to a handler and manages all request-response transport.
  • handler: the glue. It receives client requests and orchestrates interactions between the in-memory store, runtime manager, and persistent store to respond to those requests. It’s so central that we actually name this component ContainerManager in our source code.
  • persistent store: handles data on disk. This includes preparing container bundles, saving state to survive restarts, and reloading relevant data on restarts. Named ContainerStore.
  • runtime manager: interacts with the low-level container runtime. This includes creating/starting/<insert verb here>ing containers, updating container status, etc. Named ContainerRuntime.
  • in-memory store: maintains container state in-memory. This allows us to efficiently service client requests. Named ContainerMap.
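In code, the handler owns the other three components. Here is a simplified sketch of that wiring - the field and type details below are assumptions based on the descriptions above, not the project’s actual definitions:

use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::RwLock;

struct Container; // id, name, status, timestamps, command, args (elided)

/// The handler: owns the other components and orchestrates them per request.
struct ContainerManager {
    container_map: ContainerMap,          // in-memory store
    container_store: ContainerStore,      // persistent store (bundles + state files)
    container_runtime: ContainerRuntime,  // wrapper around the low-level runtime (runc)
}

/// In-memory store; interior mutability lets `&self` handlers update it.
struct ContainerMap {
    containers: RwLock<HashMap<String, Container>>,
}

/// Persistent store rooted at a directory like lib_root/containers/.
struct ContainerStore {
    root_dir: PathBuf,
}

/// Low-level runtime wrapper, e.g. pointing at /usr/bin/runc.
struct ContainerRuntime {
    runtime_path: PathBuf,
}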
Here is a diagram of the component breakdown:
Now that we can visualize the breakdown of responsibilities in our container manager, let’s consider the business logic details.

Container manager business logic details

We’ve established that a container manager needs to prepare container bundles, maintain persistent and in-memory state, restart gracefully, and leverage the low-level container runtime. To zoom in on the details and understand how these overlapping responsibilities are fulfilled, let’s explore the logic behind each API call:

  • container create
  • container start
  • container stop
  • container get
  • container list
  • container delete
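Each of these calls moves a container through a simple lifecycle, which the code below tracks as Status::Created, Status::Running, and Status::Stopped. A minimal sketch of such a status type (the project’s real Status enum may carry more variants or data):

/// Minimal sketch of the container lifecycle states used throughout this section.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Status {
    Created, // the runtime has created the container; the process has not started yet
    Running, // the process has been started and has not exited
    Stopped, // the process has exited or been killed
}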

Container create

Here is an example container create command from the client’s perspective:

vagrant@vagrant:~$ bin/client container create my_container --rootfs=~/tmp/rootfs/ echo Hello World
  • bin/client container create → invoke the client executable and specify that we’re performing a container creation
  • my_container → the container name
  • --rootfs=~/tmp/rootfs → specify the container’s root filesystem
  • echo → the command to execute in our container
  • Hello World → the args provided to our command
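For reference, the client’s argument parsing could be expressed with a crate like clap. The following is a hypothetical sketch of the create subcommand’s shape (the “container” command group is omitted, and this is not necessarily how the project’s CLI is actually built):

use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "client")]
struct Cli {
    #[command(subcommand)]
    command: ContainerCommand,
}

#[derive(Subcommand)]
enum ContainerCommand {
    /// container create <name> --rootfs=<path> <command> [args...]
    Create {
        name: String,
        #[arg(long)]
        rootfs: String,
        command: String,
        args: Vec<String>,
    },
}

fn main() {
    let cli = Cli::parse();
    // dispatch to the gRPC client based on cli.command ...
    let _ = cli.command;
}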
After processing this request, the response from the create container invocation is:
created: 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
We’re given back the container ID 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2. We can use this ID in future interactions with the container manager. As input, the container manager accepts a container name, root filesystem, command and arguments. It processes the request, and as output it produces a container ID. Under the hood, here’s what the container manager needed to do:
  • Generate a random ID for the container. From our example, this is 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
  • Create the container directory. This will house the container root filesystem, container specification file, and serialized container state. It will live at lib_root/containers/6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
  • Create the container bundle inside the container directory. The container bundle is a format defined by the Open Container Initiative that includes all the information needed to load and run a container. It will live at lib_root/containers/6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2/bundle. Creating the container bundle requires us to:
    • Copy the provided container root filesystem to the container bundle directory. This will live at lib_root/containers/6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2/bundle/rootfs
    • Create the container spec file (a JSON file) in the container bundle directory, according to the OCI spec. This will live at lib_root/containers/6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2/bundle/config.json. Update the spec file with provided command echo and arguments Hello World
  • Now that our container bundle is prepared, execute the low-level container runtime (runc) container create command. We need to handle errors (rolling back any of our changes on failure), persist container state to disk/memory, and report the created container ID to the user
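That last step is where the low-level runtime enters the picture. The project hides it behind the ContainerRuntime component; here is a rough, hypothetical sketch of the underlying runc invocation (not the project’s actual code):

use std::process::Command;

/// Sketch: run `runc create --bundle <dir> --pid-file <file> <id>`, which
/// prepares the container from the OCI bundle without starting the process.
fn runc_create(
    runc_path: &str,
    bundle_dir: &str,
    pid_file: &str,
    container_id: &str,
) -> std::io::Result<()> {
    let status = Command::new(runc_path)
        .arg("create")
        .arg("--bundle")
        .arg(bundle_dir) // directory containing rootfs/ and config.json
        .arg("--pid-file")
        .arg(pid_file) // runc writes the container init pid here
        .arg(container_id)
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("runc create failed for `{}`", container_id),
        ));
    }
    Ok(())
}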
Here is the code that the container manager runs to handle a create request:
/// create_container does the following:
/// - invoke create_container_helper to create the container
/// - on an error, invoke rollback_container_create to clean up leftover
///   state, including in-memory container and container directory on disk
pub fn create_container(
   &self,
   opts: ContainerOptions,
) -> Result<String, ContainerManagerError> {
   self.create_container_helper(opts).or_else(|err| {
       // best effort rollback
       self.rollback_container_create(&err.container_id);
        Err(err.source)
   })
}

/// create_container_helper does the following:
/// - generate container id
/// - create and store the in-memory container structure
/// - create the container directory on disk
/// - create the container bundle:
///     - copy the rootfs into the container bundle
///     - generate the runc spec for the container
/// - create the container (runc create)
/// - update container status, write those to disk
fn create_container_helper(
   &self,
   opts: ContainerOptions,
) -> Result<String, InternalCreateContainerError> {
   // generate container id
   let container_id = rand_id();
   // create & store in-memory container structure
   let container: Container =
       new_container(&container_id, &opts.name, &opts.command, &opts.args);
   let container_id =
       self.container_map
           .add(container)
           .map_err(|err| InternalCreateContainerError {
               container_id: container_id.clone(),
               source: err.into(),
           })?;
   // create container directory on disk
   self.container_store
       .create_container_directory(&container_id)
       .map_err(|err| InternalCreateContainerError {
           container_id: container_id.clone(),
           source: err.into(),
       })?;
   // create container bundle on disk
   let container_bundle_dir = self
       .container_store
       .create_container_bundle(&container_id, &opts.rootfs_path)
       .map_err(|err| InternalCreateContainerError {
           container_id: container_id.clone(),
           source: err.into(),
       })?;
   // create container runtime spec on disk
   let spec_opts =
       RuntimeSpecOptions::new(container_bundle_dir.clone(), opts.command, opts.args);
   self.container_runtime
       .new_runtime_spec(&spec_opts)
       .map_err(|err| InternalCreateContainerError {
           container_id: container_id.clone(),
           source: err.into(),
       })?;
   // create container
   let create_opts = RuntimeCreateOptions::new(
       container_bundle_dir.clone(),
       "container.pidfile".into(),
       container_id.clone(),
   );
   self.container_runtime
       .create_container(create_opts)
       .map_err(|err| InternalCreateContainerError {
           container_id: container_id.clone(),
           source: err.into(),
       })?;
   // update container creation time, status, and persist to disk
   self.update_container_created_at(&container_id, SystemTime::now())
       .map_err(|source| InternalCreateContainerError {
           container_id: container_id.clone(),
           source,
       })?;
   self.update_container_status(&container_id, Status::Created)
       .map_err(|source| InternalCreateContainerError {
           container_id: container_id.clone(),
           source,
       })?;
   self.atomic_persist_container_state(&container_id)
       .map_err(|source| InternalCreateContainerError {
           container_id: container_id.clone(),
           source,
       })?;
   Ok(container_id)
}

Our container is now in a Created state, which means it’s ready to be run!

Container start

Once our container is in a Created state, it is ready to be started. The start command looks as follows from the client’s perspective:

vagrant@vagrant:~$ bin/client container start 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
and the response will be:
started: 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
As input, the container manager takes a container ID for a container in the Created state, and after handling the request it returns that container ID back to us. Under the hood, the container manager needs to do the following things when it receives a start request:
  • Verify that the container exists and is in a Created state.
  • Invoke the low-level container runtime to start the container. This means our desired process will begin.
  • Update the container state in memory (we now have a container start time and the new status should be Running) and persist this state to disk.
Here is the code that the container manager runs to handle a start request:
/// start_container does the following:
/// - ensure container exists and is in created state
/// - start the container via the container runtime
/// - update container start time and status, then persist
pub fn start_container(&self, container_id: &ID) -> Result<(), ContainerManagerError> {
   // ensure container exists and is in created state
   match self.container_map.get(container_id) {
       Ok(container) => {
           if container.status != Status::Created {
               return Err(
                   ContainerManagerError::StartContainerNotInCreatedStateError {
                       container_id: container_id.clone(),
                   },
               );
           }
       }
       Err(err) => return Err(err.into()),
   }
   // container start
   self.container_runtime.start_container(container_id)?;
   // update container start time and status in memory, then persist to disk
   //     this current approach just optimistically sets the container to
   //     running and allows future calls to get/list to synchronize with runc.
   //     one other way we could consider doing this is polling runc until we
   //     see that the container is running and then updating.
   self.update_container_started_at(&container_id, SystemTime::now())?;
   self.update_container_status(&container_id, Status::Running)?;
   self.atomic_persist_container_state(&container_id)
}

Our container is now in a Running state. We can now get the container from our container manager to follow its status, or stop the container. Let’s look at the get command next, since we need to explore how we handle the state of containers that exit on their own.

Container get (and list)

The get command retrieves the status of a container, whether it’s in a Created, Running, or Stopped state. Directly after creating the container, the get request looks as follows from the client’s perspective:

vagrant@vagrant:~$ bin/client container get 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
and the response will be:
  • ID: 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
  • NAME: my_container
  • STATUS: Created
  • CREATED: 2020-08-19T14:03:23.334750031+00:00
  • STARTED_AT: Not started yet.
  • COMMAND: echo
  • ARGS: Hello,World
After starting a container, it’s possible that the container will exit on its own (either successfully or due to an internal error). In our example container, we’re executing the command echo Hello World, which we should expect to exit almost immediately. This means that after starting the container, our next call to get should recognize that the container has stopped, updating the status in the process. For example:
vagrant@vagrant:~$ bin/client container start 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2 && sleep 2
vagrant@vagrant:~$ bin/client container get 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
will inform us that the status is now Stopped. Under the hood, this requires the container manager to synchronize itself with the low-level container runtime to determine the current status of the container before reporting it back to the user. Note that a container runtime shim often handles this synchronization on an event-driven basis for the container manager. The shim is also able to provide richer information, such as a precise finish time and the exit code of the container process. Our container manager is going to be comparatively simple, and in the future it will integrate with a container runtime shim to provide a fuller picture. To handle a get request, our container manager does the following:
  • It cheaply verifies that the container is known to exist by checking its in-memory store.
  • After verifying the container’s existence, the container manager needs to reconcile the true state of the world with the state of the world that it holds in memory and on disk. In the time that’s passed between starting the container and the arrival of the get request, the container manager does not know if the container has exited. To perform this reconciliation, the container manager retrieves the current state of the container from the low-level container runtime, and updates its in-memory and on-disk stores.
  • The container manager is now confident that it has the true state of the world, and returns that to the client.
Here is the code that the container manager runs to handle a get request:
/// get_container does the following:
/// - synchronize container state with the container runtime, which fails
///   if the container does not exist
/// - return container state from memory
pub fn get_container(
   &self,
   container_id: &ID,
) -> Result<Box<Container>, ContainerManagerError> {
   self.sync_container_status_with_runtime(container_id)?;
   self.container_map
       .get(container_id)
       .map_err(|err| err.into())
}

/// list_containers does the following:
/// - for every known container, synchronize container state with the
///   container runtime, which fails if any of the containers do not exist
/// - return container states from memory
pub fn list_containers(&self) -> Result<Vec<Container>, ContainerManagerError> {
   match self.container_map.list() {
       Ok(containers) => {
           for container in containers.iter() {
               self.sync_container_status_with_runtime(container.id())?;
           }
       }
       Err(err) => return Err(err.into()),
   };
   self.container_map.list().map_err(|err| err.into())
}

Notice that the list command is just a generalized form of container get. Instead of synchronizing and returning a specific container ID, the container manager walks its entire in-memory store, refreshes the state of every container, and returns those results to the client in their entirety.
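Both get and list delegate that reconciliation to sync_container_status_with_runtime, which isn’t shown in this post. Conceptually, it asks the low-level runtime for the container’s current state and folds the answer back into the in-memory and on-disk stores. Here is a rough, hypothetical sketch of the runtime-query half, assuming the serde_json crate and the Status enum sketched earlier (not the project’s actual code):

use std::process::Command;

/// Sketch: `runc state <id>` prints a JSON document whose "status" field is
/// "created", "running", "stopped", etc. Map it onto our Status type.
fn query_runtime_status(
    runc_path: &str,
    container_id: &str,
) -> Result<Status, Box<dyn std::error::Error>> {
    let output = Command::new(runc_path)
        .arg("state")
        .arg(container_id)
        .output()?;
    if !output.status.success() {
        return Err(format!("runc state failed for `{}`", container_id).into());
    }
    let state: serde_json::Value = serde_json::from_slice(&output.stdout)?;
    match state["status"].as_str() {
        Some("created") => Ok(Status::Created),
        Some("running") => Ok(Status::Running),
        Some("stopped") => Ok(Status::Stopped),
        other => Err(format!("unexpected runc status: {:?}", other).into()),
    }
}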

Container stop

Once our container is in a Running state, if we don’t wish for it to continue running (and it has not yet exited), we can choose to stop the container. Note that stopping and deleting a container are distinct operations, and delete can only proceed if a container is in a Stopped state (or if it has not yet been started). The stop command looks as follows from the client’s perspective:

vagrant@vagrant:~$ bin/client container stop 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
and the response will be:
stopped: 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
If we were to get this container, it would now be reporting a Stopped status. Under the hood, our container manager does the following in response to a stop request:
  • It verifies that the container exists and is in a Running state via its in-memory store.
  • After verifying existence and Running status, the container manager instructs the low-level container runtime to kill the container process by sending it a signal (our implementation sends a SIGKILL, as sketched after this list).
  • The container manager updates the container status to Stopped, storing the changes in-memory and persisting to disk as well.
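The kill_container call used below presumably boils down to a runc kill invocation; here is a short, hypothetical sketch of that shell-out (not the project’s actual ContainerRuntime code):

use std::process::Command;

/// Sketch: `runc kill <id> KILL` sends SIGKILL to the container's init process
/// (runc defaults to SIGTERM when no signal is given).
fn runc_kill(runc_path: &str, container_id: &str) -> std::io::Result<std::process::ExitStatus> {
    Command::new(runc_path)
        .arg("kill")
        .arg(container_id)
        .arg("KILL")
        .status()
}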
Here is the code that the container manager runs to handle a stop request:
/// stop_container does the following:
/// - ensure container exists and is in running state
/// - send a SIGKILL to the container via the container runtime
/// - update container status, then persist
pub fn stop_container(&self, container_id: &ID) -> Result<(), ContainerManagerError> {
   // ensure container exists and is in running state
   match self.container_map.get(container_id) {
       Ok(container) => {
           if container.status != Status::Running {
               return Err(ContainerManagerError::StopContainerNotInRunningStateError {
                   container_id: container_id.clone(),
               });
           }
       }
       Err(err) => return Err(err.into()),
   }
   // send SIGKILL to container via the container runtime
   self.container_runtime.kill_container(container_id)?;
   // update container status and persist to disk
   self.update_container_status(&container_id, Status::Stopped)?;
   self.atomic_persist_container_state(&container_id)
}

At this point, the container process is no longer running and the container is in a Stopped state. The container metadata still exists in our container manager and the low-level container runtime, and the container filesystem is still present on disk if we wished to inspect it. Our container is now eligible for deletion, which will free those resources.

Container delete

Once a container is in a Stopped state, it is eligible for deletion, which frees the resources that remain allocated to it even after the container process has terminated. The delete command, unsurprisingly, looks like the following from the client’s perspective:

vagrant@vagrant:~$ bin/client container delete 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
and the response will be:
deleted: 6bce9dc1-0a03-4bb7-86f5-dd75fdae7fa2
At this point, the container and the resources it occupies no longer exist. Calls to get will fail, and calls to list will not find it. Under the hood, the container manager does the following in response to a delete call:
  • The container manager verifies that the container exists and is in a Stopped state (it is also valid to delete containers that are not yet started, meaning they are in a Created state).
  • After verifying existence and Stopped or Created status, the container manager tells the low-level container runtime to delete the container, which allows it to clean up its state related to this container.
  • Finally, the container manager removes its in-memory copy of the container and purges all container state from disk (both the container manager’s bookkeeping data and the container filesystem itself).
Here is the code that the container manager runs to handle a delete request:
/// delete_container does the following:
/// - ensure container exists and is in stopped state
/// - tell the container runtime to delete the container
/// - remove remnants of container in memory and on disk
pub fn delete_container(&self, container_id: &ID) -> Result<(), ContainerManagerError> {
   // ensure container exists and is in stopped state
   match self.container_map.get(container_id) {
       Ok(container) => {
           if container.status != Status::Stopped && container.status != Status::Created {
               return Err(
                   ContainerManagerError::DeleteContainerNotInDeleteableStateError {
                       container_id: container_id.clone(),
                   },
               );
           }
       }
       Err(err) => return Err(err.into()),
   }
   // instruct container runtime to delete container
   self.container_runtime.delete_container(container_id)?;
   // remove container from memory and disk
   self.container_map.remove(&container_id);
   self.container_store
       .remove_container_directory(&container_id);
   Ok(())
}

At this point, no trace of the container exists.

Surviving restarts

Until now, we’ve examined how a container manager carries out its explicit responsibilities of manipulating containers in response to client requests. As we stated earlier, another requirement is that the container manager be able to survive restarts without impacting the containers it manages. This allows us to upgrade the container manager on the fly, and more importantly, it decouples the stability of the containers from the stability of the container manager. This means that if the container manager were to crash for some reason, our existing containers would continue running happily.

In discussing the business logic behind create, start, stop, get, list, and delete, we’ve already encountered some of the required groundwork that allows our container manager to survive restarts. Specifically, we persist container state to disk at every possible opportunity. This is important because that on-disk state is used to reconstruct the state of the world at container manager startup time. In fact, every time the container manager starts, it will attempt to reconstruct the state of the world, even the first time it starts on a machine (in which case the reconstruction is a no-op).

It’s important to recognize we have no guarantee that the state of the world at restart is the same as the state of the world when the container manager last stopped. This means that in addition to reloading the state from disk, the container manager needs to resync the state of every container it knew about prior to stopping. The goals of the restart are as follows:

  • Find every container the container manager knew about when it was last running.
  • Resync the state of those previously known containers.
  • Save in-memory and persist to disk the updated state of those containers.
  • Purge any out-of-date state on disk.
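A building block that shows up in these goals (and in every handler earlier) is atomic_persist_container_state, whose implementation isn’t shown in this post. Conceptually it can be as simple as serializing the container and atomically swapping the file into place; here is a hypothetical sketch (the file names and layout are assumptions):

use std::fs;
use std::io::Write;
use std::path::Path;

/// Sketch: write the serialized container state to a temporary file, flush it,
/// then rename it over the real state file. rename is atomic within a filesystem,
/// so a crash mid-write never leaves a half-written state file behind.
fn atomic_persist(container_dir: &Path, serialized: &[u8]) -> std::io::Result<()> {
    let tmp_path = container_dir.join("state.json.tmp");
    let final_path = container_dir.join("state.json");
    let mut tmp = fs::File::create(&tmp_path)?;
    tmp.write_all(serialized)?;
    tmp.sync_all()?;
    fs::rename(&tmp_path, &final_path)
}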
Fortunately, while the container manager does need to resync the state of any containers it knew about previously, it does not need to “find” any new containers: none could have been created while the container manager was stopped. Under the hood, the container manager runs the following reload routine every time it starts up (even if this is in fact the first time it starts up):
  • Read all container state files from disk at a known location. These are the files that we write on container creation and update throughout the lifecycle of the container.
    • If any of these state files fails to be parsed, we abandon the container and remove its state.
  • Add the container to the in-memory store.
  • Sync the container state with the low-level container runtime, persisting it in memory and on disk.
Here is the container manager’s reload code, run every time it starts:
/// reload does the following:
/// - reads all container state files off disk
///     - if any of these state files fail to be parsed, we assume the
///       container is corrupted and remove it
/// - adds the container to the in-memory store
/// - syncs the container state with the container runtime
fn reload(&self) -> Result<(), ContainerManagerError> {
   // get container ids off disk
   let container_ids = self
       .container_store
       .list_container_ids()
       .map_err(|source| ContainerManagerError::ReloadError { source })?;
   for container_id in container_ids {
       // parse container state file
       let container = match self.container_store.read_container_state(&container_id) {
           Ok(container) => container,
           Err(err) => {
               error!(
                   "unable to parse state of container `{}`, err: `{}`. Removing container.",
                   container_id, err
               );
               self.container_store
                   .remove_container_directory(&container_id);
               continue;
           }
       };
       // add container to in-memory store
       match self.container_map.add(container) {
           Ok(_) => (),
           Err(err) => {
               error!(
                   "unable to add container `{}` to in-memory state, err: `{:?}`. Continuing.",
                   container_id, err
               );
               continue;
           }
       }
       // sync container with container runtime
       match self.sync_container_status_with_runtime(&container_id) {
           Ok(_) => (),
           Err(err) => {
               error!(
                   "unable to sync state of container `{}`, err: `{:?}`. Removing container.",
                   container_id, err
               );
               self.container_store
                   .remove_container_directory(&container_id);
               self.container_map.remove(&container_id);
               continue;
           }
       }
   }
   Ok(())
}

At this point, the container manager is up-to-date with the current state of the world and can proceed with normal operations.

Let’s see it in action

I ran this in a Vagrant box with Ubuntu 18.04. Because our container manager shells out to runc under the hood, it needs to run on a Linux distro to work properly. Here’s what I put in my Vagrantfile:

Vagrant.configure("2") do |config|
    config.vm.box = "hashicorp/bionic64"
end

I then set up the environment as follows:
# in the directory with your Vagrantfile, setup Vagrant box
$ vagrant up

# login
$ vagrant ssh

# install gcc
$ sudo apt-get update
$ sudo apt install -y gcc

# install rust, this takes a moment
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# add cargo to your PATH environment variable
$ source $HOME/.cargo/env

# install docker (used to create container root filesystem)
$ sudo curl -sSL https://get.docker.com/ | sh

# add the vagrant user to the docker group so we don't need to run docker commands as root
$ sudo usermod -aG docker vagrant

# logout and back in for the group change to take effect
$ logout
$ vagrant ssh

Now I can run the daemon in my Vagrant box:
# clone the project
$ git clone https://github.com/willdeuschle/cruise
$ cd cruise

# build the project (daemon and client)
$ cargo build

# start the daemon, specifying its root directory and the path to runc
$ target/debug/daemon run --lib_root=./tmp/lib_root --runtime_path=/usr/bin/runc

Now it’s time to interact with the daemon using the client. From a new shell, start another session in your Vagrant box:
# from a new shell, for the client
# in the directory with your Vagrantfile, login to your Vagrant box
$ vagrant ssh

# create rootfs for container
$ cd cruise && mkdir -p tmp/rootfs
$ docker export $(docker create busybox) | tar -C tmp/rootfs -xf -

# create container
$ target/debug/client container create my_container --rootfs=tmp/rootfs/ sh -- -c "echo hi; sleep 60; echo bye"
> created: 3a92e711-034e-410f-8aa9-700ae23c3a8d

# start container
$ target/debug/client container start 3a92e711-034e-410f-8aa9-700ae23c3a8d
> started: 3a92e711-034e-410f-8aa9-700ae23c3a8d

At this point, if we switch back to the daemon shell, we should see hi output over there from our new container!
# back in the daemon shell
$ target/debug/daemon run --lib_root=./tmp/lib_root --runtime_path=/usr/bin/runc
...
> hi

If we switch back to our client shell, we can interact some more with our container:
# from the client shell
# get container status
$ target/debug/client container get 3a92e711-034e-410f-8aa9-700ae23c3a8d
> ID                                   NAME         STATUS  EXIT_CODE CREATED_AT                          STARTED_AT                          FINISHED_AT COMMAND ARGS
  3a92e711-034e-410f-8aa9-700ae23c3a8d my_container Running -1        2020-08-30T23:46:45.788010499+00:00 2020-08-30T23:47:19.355861796+00:00 n/a         sh      -c, echo hi; sleep 60; echo bye

For the next minute, we will find that our container is in a Running state (while it sleeps). After a minute, in our daemon shell, we will see bye output from our container:
# back in the daemon shell
$ target/debug/daemon run --lib_root=./tmp/lib_root --runtime_path=/usr/bin/runc
...
> hi
...
> bye

and the container will then transition into a Stopped state:
# from the client shell
# get container status
$ target/debug/client container get 3a92e711-034e-410f-8aa9-700ae23c3a8d
> ID                                   NAME         STATUS  EXIT_CODE CREATED_AT                          STARTED_AT                          FINISHED_AT COMMAND ARGS
  3a92e711-034e-410f-8aa9-700ae23c3a8d my_container Stopped -1        2020-08-30T23:46:45.788010499+00:00 2020-08-30T23:47:19.355861796+00:00 n/a         sh      -c, echo hi; sleep 60; echo bye

We can now clean up the container:
# from the client shell
# delete container
$ target/debug/client container delete 3a92e711-034e-410f-8aa9-700ae23c3a8d
> deleted: 3a92e711-034e-410f-8aa9-700ae23c3a8d

And if we list our containers, we will see our container no longer exists:
# list containers
$ target/debug/client container list
> ID NAME STATUS EXIT_CODE CREATED_AT STARTED_AT FINISHED_AT COMMAND ARGS

And that’s all for now! We have a minimal container manager. Next up, we will implement a container runtime shim to inject some more functionality and interactivity into our nascent container ecosystem.