Building deal.II Projects with Docker
Official deal.II Docker image
The deal.II project provides an official Docker image, primarily intended for trying out deal.II before installing it on your system. However, the same image can also be used to compile and run deal.II programs without a native installation.
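For example, you can pull the official image (the same dealii/dealii:latest tag used as the base image below) and open a throwaway shell in it:

```shell
# Pull the official deal.II image from Docker Hub
docker pull dealii/dealii:latest

# Start an interactive shell in a disposable container
docker run --rm -it dealii/dealii:latest bash
```

The --rm flag removes the container when the shell exits, so nothing lingers on your system.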
Creating your own image
You can easily create your own Docker image using the official image as a base and install the basic tools your project needs.
Here is a simple Dockerfile that adds clangd (an LSP server) and uv for running Python post-processing scripts:
FROM dealii/dealii:latest
USER root
# Install clangd and zsh
RUN apt-get update -y && apt-get install -y clangd zsh
# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
# Set default shell and user
USER root
SHELL ["/usr/bin/zsh", "-c"]

Compiling code with your container
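With the Dockerfile above saved in a directory of its own, build and tag the image first (the tag neuroconvergent/deal-ii is what the commands in this section assume):

```shell
# Build the custom image from the Dockerfile in the current directory
docker build -t neuroconvergent/deal-ii .
```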
Launching a shell inside your container
You can launch a shell in your container with the docker run command, using
the -v option to mount your project directory into the container and the -w
option to set the working directory the shell starts in.
docker run --rm -it \
-v "$(pwd):$(pwd)" \
-w "$(pwd)" \
neuroconvergent/deal-ii \
zsh

Running commands directly inside your container
Similar to launching a shell, you can execute any command inside your
container using the docker run command. You can use this to replace your
project's compile commands so that compilation happens inside Docker. You can
even run your LSP server inside the container, giving you LSP support that
matches the build environment.
docker run --rm -it \
-v "$(pwd):$(pwd)" \
-w "$(pwd)" \
neuroconvergent/deal-ii \
make

docker run --rm -it \
-v "$(pwd):$(pwd)" \
-w "$(pwd)" \
neuroconvergent/deal-ii \
clangd

Configuring Neovim LSP
As mentioned previously, you can have your LSP server start inside your
container. To make nvim automatically launch the LSP server inside the
container for deal.II projects, we can change the LSP command based on
nvim's current working directory:
local deal_project_roots = {
"/home/neuroconvergent/Programming/KMC-AM",
}
local cwd = vim.fn.getcwd()
for _, root in ipairs(deal_project_roots) do
if cwd == root then
vim.lsp.config.clangd = {
cmd = {
"docker",
"run",
"--rm",
"-i",
"-v",
"/home/neuroconvergent:/home/neuroconvergent",
"-w",
root,
"neuroconvergent/deal-ii",
"clangd",
"--background-index",
"--clang-tidy",
"--header-insertion=iwyu",
"--completion-style=detailed",
"--fallback-style=llvm",
},
}
break
end
end
vim.lsp.enable("clangd")

Automating builds with ninja
Ninja is a make-like build system for defining build and run commands, and it
is generally faster than make, especially for incremental builds. You can
create a build.ninja file that defines build and compile commands to be run
in the Docker container and use it as an automation tool.
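As a sketch, a minimal build.ninja wrapping the containerized make invocation might look like this (the image tag and the $PWD mount are assumptions carried over from the docker run examples above; the output name app is hypothetical):

```ninja
# Hypothetical rule that runs make inside the deal.II container.
# $$ escapes ninja's own variable syntax so $PWD is expanded by the shell.
rule docker_make
  command = docker run --rm -v $$PWD:$$PWD -w $$PWD neuroconvergent/deal-ii make
  description = building inside the deal.II container

build app: docker_make
```

Running ninja app then executes the containerized build. Treat this as a starting point: a real project would list explicit inputs on the build line so ninja can track dependencies and skip up-to-date targets.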
This Docker-based workflow is essential for the kMC-FEA project,
ensuring consistent compilation of the deal.II codebase used in the
transient thermal model and other FEA
simulations. The containerized build environment supports both the
kMC method implementation and the
JMAK model integration, providing reproducible builds across
development machines.