Xiaokang's Article Reading Notes

An Overview of My Personal Docker Images

2025-05-31

Below are some Docker images I use for my own work.

1. gromacs-GPU Docker

A self-written GROMACS Docker image, with CUDA 12.1 and GROMACS 2024.5.

It can be downloaded here: Dockerfile

# Use the official NVIDIA CUDA (Ubuntu 22.04) base image
FROM nvidia/cuda:12.1.1-devel-ubuntu22.04

# Set environment variables to avoid interactive prompts during installation
ENV DEBIAN_FRONTEND=noninteractive
SHELL ["/bin/bash", "-c"]

# Install dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
    libgomp1 \
    liblapack3 \
    openmpi-common \
    build-essential \
    cmake \
    python3 \
    wget \
    openmpi-bin \
    libopenmpi-dev \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /usr/local/src

RUN echo "export OMPI_ALLOW_RUN_AS_ROOT=1" >> ~/.bashrc && \
    echo "export OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1" >> ~/.bashrc  && \
    source ~/.bashrc

# Download and install GROMACS
RUN wget https://ftp.gromacs.org/gromacs/gromacs-2024.5.tar.gz && \
    tar xfz gromacs-2024.5.tar.gz && \
    cd gromacs-2024.5 && \
    mkdir build && cd build && \
    cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=CUDA -DGMX_MPI=on && \
    make -j$(nproc) && make install && \
    cd ../.. && rm -rf gromacs-2024.5*


# Source GROMACS
RUN echo "source /usr/local/gromacs/bin/GMXRC" >> ~/.bashrc

# Set entrypoint
# ENTRYPOINT ["gmx"]
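
Build the image (the tag gromacs:2024.5 below is my own choice; it just has to match the tag used in the run command):

docker build -t gromacs:2024.5 .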

Example run:

docker run -it --rm --gpus all -v $HOME/shiyan/zhangxuan/protein-ligand-md:$HOME/shiyan gromacs:2024.5
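
A quick check inside the container; source GMXRC manually in case ~/.bashrc was not read, and note that the MPI build installs the binary as gmx_mpi. The input file md.tpr is only a placeholder for your own run input:

source /usr/local/gromacs/bin/GMXRC
gmx_mpi --version                    # confirm the CUDA-enabled build is found
gmx_mpi mdrun -deffnm md -nb gpu -v  # run MD with non-bonded work offloaded to the GPU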

2. Dl_binder_design

A self-written dl_binder_design image; the RFdiffusion part is not included.

It can be downloaded here: dl_binder_design.zip

# Use an official NVIDIA CUDA runtime image as a parent image
FROM nvcr.io/nvidia/cuda:12.1.0-cudnn8-runtime-ubuntu22.04

# Set the working directory in the container
WORKDIR /app

COPY . /app


ENV PATH="/app/miniconda/bin:$PATH"

# Install Conda Mini (Miniconda)
RUN apt-get -q update && apt-get install -y wget git parallel && \
    wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-py311_25.3.1-1-Linux-x86_64.sh -O miniconda.sh && \
    bash miniconda.sh -b -p /app/miniconda && /app/miniconda/bin/conda init bash  && \
    rm miniconda.sh && \
    /app/miniconda/bin/conda clean -a -y && cd dl_binder_design/ && \
    conda config --add channels https://levinthal:paradox@conda.graylab.jhu.edu && \
    conda config --add channels conda-forge && \
    cd include && conda env create -f proteinmpnn_fastrelax.yml && \
    conda env create -f af2_binder_design.yml && \
    cd ../mpnn_fr && git clone https://github.com/dauparas/ProteinMPNN.git && \
    cd ../af2_initial_guess/ && mkdir -p model_weights/params

WORKDIR /app/dl_binder_design

# Default command
CMD ["bash"]

Assuming the model weights are located in ~/install/dl_binder_design/model_weights.
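
If the weights still need to be fetched, the AlphaFold2 params archive (the same one used in the BindCraft section below) can be unpacked into model_weights/params, for example:

mkdir -p $HOME/install/dl_binder_design/model_weights/params
cd $HOME/install/dl_binder_design/model_weights/params
wget https://storage.googleapis.com/alphafold/alphafold_params_2022-12-06.tar
tar -xvf alphafold_params_2022-12-06.tar && rm alphafold_params_2022-12-06.tar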

Example run:

docker run -it --rm --gpus all -v $HOME/install/dl_binder_design/model_weights:/app/dl_binder_design/af2_initial_guess/model_weights dl:latest

Assuming your data is located in ~/shiyan/luc/xxx-2:

docker run -it --rm --gpus all -v $HOME/install/dl_binder_design/model_weights:/app/dl_binder_design/af2_initial_guess/model_weights -v $HOME/shiyan/luc/xxx-2:/app/xxx-2 dl:latest
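
Inside the container, a pass over a folder of input PDBs looks roughly like the sketch below. The env names are the ones created from the two yml files, and the script flags should be double-checked against each script's --help; paths under /app/xxx-2 refer to the data mount above:

# ProteinMPNN + FastRelax interface design (env name from proteinmpnn_fastrelax.yml)
conda activate proteinmpnn_binder_design
python mpnn_fr/dl_interface_design.py -pdbdir /app/xxx-2/pdbs -outpdbdir /app/xxx-2/mpnn_out

# AF2 initial-guess filtering of the designs (env name from af2_binder_design.yml)
conda activate af2_binder_design
python af2_initial_guess/predict.py -pdbdir /app/xxx-2/mpnn_out -outpdbdir /app/xxx-2/af2_out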

3. PyTorch Geometric Docker镜像

This Dockerfile was written with Copilot, and it turned out very well.

# FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04
FROM nvidia/cuda:12.6.2-cudnn-devel-ubuntu22.04

# Set up environment
ENV DEBIAN_FRONTEND=noninteractive

# Install Python and system dependencies
RUN apt-get update && \
    apt-get install -y python3 python3-pip python3-dev git && \
    rm -rf /var/lib/apt/lists/*

# Upgrade pip
RUN python3 -m pip install --upgrade pip && pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple

# Install PyTorch (CUDA 12.6)
RUN pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126

# Install PyTorch Geometric and dependencies
RUN pip install torch_geometric
RUN pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.6.0+cu126.html

# Install additional Python packages
RUN pip install numpy pandas matplotlib scikit-learn openpyxl seaborn tqdm rdkit==2024.9.6 jupyterlab

# Set up working directory
WORKDIR /app

# Expose Jupyter Notebook port
EXPOSE 8888

CMD ["jupyter", "lab", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root", "--NotebookApp.token=''"]

Build the image:

docker build -t my-pyg-image .

Run the container:

docker run -it --rm --gpus all -p 8888:8888 -v ${PWD}/app:/app my-pyg-image
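
A quick smoke test that the CUDA wheels work, run as a one-off command instead of the Jupyter entrypoint:

docker run --rm --gpus all my-pyg-image \
    python3 -c "import torch, torch_geometric; print(torch.__version__, torch_geometric.__version__, torch.cuda.is_available())"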

4. ThermoMPNN-D

Dockerfile:

# Use an official NVIDIA CUDA image (pulled through a mirror) as a parent image
FROM hub.mirrorsite.site/nvidia/cuda:11.7.1-devel-ubuntu22.04

# Set the working directory in the container
WORKDIR /app

COPY . /app


ENV PATH="/app/miniconda/bin:$PATH"

# Install Conda Mini (Miniconda)
RUN apt-get -q update && apt-get install -y wget git && \
    wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-py311_25.3.1-1-Linux-x86_64.sh -O miniconda.sh && \
    bash miniconda.sh -b -p /app/miniconda && /app/miniconda/bin/conda init bash  && \
    rm miniconda.sh && \
    conda config --set show_channel_urls yes &&\
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main &&\
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r  &&\
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2 &&\
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch &&\
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge &&\
    /app/miniconda/bin/conda clean -a -y &&\
    cd ThermoMPNN-D &&\
    conda env create -f environment.yaml

WORKDIR /app/ThermoMPNN-D

# Make the cloned repository importable
ENV PYTHONPATH=/app/ThermoMPNN-D

# Default command
CMD ["bash"]

Run the container:

docker run -it --rm --gpus all -v D:/install/thermompnn-I:/app/thermompnn-I thermompnn
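
Inside the container, the conda environment defined in environment.yaml has to be activated first; its name is not pinned here, so list the environments and activate the one it created (a sketch):

conda env list                 # find the env name defined in environment.yaml
conda activate thermoMPNN      # assumed name; replace with whatever conda env list reports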

5. BindCraft

Dockerfile:

# Use an official NVIDIA CUDA runtime image as a parent image
FROM nvcr.io/nvidia/cuda:12.1.0-cudnn8-runtime-ubuntu22.04

# Set the working directory in the container
WORKDIR /app

COPY . /app/Bindcraft


ENV PATH="/app/miniconda/bin:$PATH"

# Install Conda Mini (Miniconda)
RUN apt-get -q update && apt-get install -y wget git && \
    wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-py311_25.3.1-1-Linux-x86_64.sh -O miniconda.sh && \
    bash miniconda.sh -b -p /app/miniconda && /app/miniconda/bin/conda init bash  && \
    rm miniconda.sh && \
    /app/miniconda/bin/conda clean -a -y &&\
    cd Bindcraft &&\
    chmod u+x install_bindcraft.sh &&\
    ./install_bindcraft.sh --cuda '12.1' --pkg_manager 'conda'

WORKDIR /app/Bindcraft

# Default command
CMD ["bash"]

I commented out part of the original installation script (the AlphaFold2 weights download), as shown below:

# BindCraft installation script

# AlphaFold2 weights
# echo -e "Downloading AlphaFold2 model weights \n"
# params_dir="${install_dir}/params"
# params_file="${params_dir}/alphafold_params_2022-12-06.tar"

# # download AF2 weights
# mkdir -p "${params_dir}" || { echo -e "Error: Failed to create weights directory"; exit 1; }
# wget -O "${params_file}" "https://storage.googleapis.com/alphafold/alphafold_params_2022-12-06.tar" || { echo -e "Error: Failed to download AlphaFold2 weights"; exit 1; }
# [ -s "${params_file}" ] || { echo -e "Error: Could not locate downloaded AlphaFold2 weights"; exit 1; }

# # extract AF2 weights
# tar tf "${params_file}" >/dev/null 2>&1 || { echo -e "Error: Corrupt AlphaFold2 weights download"; exit 1; }
# tar -xvf "${params_file}" -C "${params_dir}" || { echo -e "Error: Failed to extract AlphaFold2weights"; exit 1; }
# [ -f "${params_dir}/params_model_5_ptm.npz" ] || { echo -e "Error: Could not locate extracted AlphaFold2 weights"; exit 1; }
# rm "${params_file}" || { echo -e "Warning: Failed to remove AlphaFold2 weights archive"; }

Afterwards, download the params files into $HOME/install/bindcraft/params and mount that directory when running Docker.
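
The commented-out steps above correspond to the following host-side commands (same URL and extraction, just run outside the image):

mkdir -p $HOME/install/bindcraft/params
cd $HOME/install/bindcraft/params
wget https://storage.googleapis.com/alphafold/alphafold_params_2022-12-06.tar
tar -xvf alphafold_params_2022-12-06.tar && rm alphafold_params_2022-12-06.tar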

Run the container:

docker run -it --rm --gpus all -v $HOME/install/bindcraft/params:/app/Bindcraft/params bindcraft
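
Inside the container, BindCraft is started through bindcraft.py with a target settings JSON. The env name and JSON path below are assumptions based on the default install; check the BindCraft README for the exact options:

conda activate BindCraft                                              # env created by install_bindcraft.sh (assumed name)
python -u bindcraft.py --settings ./settings_target/my_target.json   # my_target.json is a placeholder for your own settings file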