Lab00 - Prepare AI Packages for Lab Instructions
0.1 TensorFlow 2 Environment Setup Using Docker on Windows
1. Unzip the Provided Files
Download the provided ZIP file (ai_tenserflow2.zip) and extract it to your working directory.
The folder structure should look like this:
ai_tenserflow2/
|-- Docker/
| |-- Dockerfile.cpu
| |-- Dockerfile.gpu
| |-- docker-compose.yml
| `-- requirements.txt
|-- notebooks/
|-- data/
|-- src/
`-- saved_model/
2. Build the Docker Images
- Open Windows PowerShell.
- Change directory to the Docker folder in the extracted project.
cd path\to\ai_tenserflow2\Docker
To build both CPU and GPU images, run:
docker compose build
To build only the CPU image:
docker compose build tf-cpu
To build only the GPU image:
docker compose build tf-gpu
3. Launch the Jupyter Server
To start the CPU container:
docker compose up tf-cpu
To start the GPU container:
docker compose up tf-gpu
4. Access JupyterLab
- Whether you started the CPU or the GPU container, open your browser and go to http://localhost:8888/lab
- There is no token or password required.
5. Using JupyterLab
- All your notebooks are in the notebooks folder inside the project.
- You can open and run these notebooks directly from the JupyterLab interface.
- If you see a message saying the notebook is "not trusted", you can trust it by clicking the "Not Trusted" button and selecting "Trust Notebook".
6. Testing Your Environment
- Open the provided test notebook and run the first few code cells to verify that TensorFlow, pydot, and graphviz are installed and working (a minimal manual check is also sketched after this list).
- If you encounter any errors about missing packages or library support, please re-check your build steps or ask your instructor for help.
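If you prefer to verify the environment by hand, the following cell is a minimal sketch of such a check; it only imports the packages named above and runs one trivial TensorFlow operation.
# Minimal environment check (paste into a new notebook cell).
import tensorflow as tf
import pydot        # used by tf.keras.utils.plot_model
import graphviz     # Python bindings for the Graphviz tools

print("TensorFlow version:", tf.__version__)
print("pydot and graphviz imported successfully")

# Confirm TensorFlow can actually execute an operation.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print("Sum of test matrix:", float(tf.reduce_sum(x)))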
7. Stopping the Containers
To stop the running container, press Ctrl+C in the PowerShell window.
If needed, you can also stop a container with one of the following commands in another PowerShell window:
docker compose stop tf-cpu
docker compose stop tf-gpu
8. Re-launching Containers
To start the containers again (without rebuilding), use:
docker compose start tf-cpu
docker compose start tf-gpu
9. Docker Command Summary Table
| Task | Command |
|---|---|
| Build both images | docker compose build |
| Build CPU image | docker compose build tf-cpu |
| Build GPU image | docker compose build tf-gpu |
| Launch CPU container | docker compose up tf-cpu |
| Launch GPU container | docker compose up tf-gpu |
| Relaunch CPU | docker compose start tf-cpu |
| Relaunch GPU | docker compose start tf-gpu |
| Recreate CPU | docker compose up --force-recreate tf-cpu |
| Recreate GPU | docker compose up --force-recreate tf-gpu |
| Stop CPU | docker compose stop tf-cpu |
| Stop GPU | docker compose stop tf-gpu |
| Remove CPU | docker compose rm tf-cpu |
| Remove GPU | docker compose rm tf-gpu |
10. Notes
- Always use the /lab path for the JupyterLab interface.
- All required code and setup files are already included in your ZIP package. You do not need to edit the Dockerfiles unless instructed.
- If you want to use the GPU container, make sure your computer supports NVIDIA Docker and GPU passthrough (a quick check is sketched below).
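To confirm that GPU passthrough is working, one option is to start the tf-gpu container and run a cell like the following in a notebook. This is a sketch using standard TensorFlow calls; an empty GPU list means the container cannot see your GPU.
# Run inside the tf-gpu container to verify GPU visibility.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
print("Built with CUDA support:", tf.test.is_built_with_cuda())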
0.2 PyTorch Deep Learning Environment with Docker on Windows
1. Introduction
In this lab, you will learn how to set up a modern PyTorch environment using Docker on Windows. You will build and run both CPU and GPU-enabled Docker containers. The setup allows you to conveniently run JupyterLab in your browser for developing and testing AI projects.
2. Project Folder Structure
Create the following project folder structure before you begin. All code and configuration files will be provided in a ZIP file.
ai_PyTorch/
|-- Docker/
| |-- Dockerfile.cpu
| |-- Dockerfile.gpu
| |-- docker-compose.yml
| `-- requirements.txt
|-- notebooks/
|-- data/
|-- src/
`-- saved_model/
3. Prerequisites
- Windows (64-bit)
- Docker Desktop installed and running (version 28.3.2 or newer recommended)
- For GPU support: A compatible NVIDIA GPU, CUDA drivers, and Docker GPU support enabled
4. Building Docker Images
Change your working directory to the Docker folder inside your project.
- Build Both CPU and GPU Images
docker compose build
- Build a CPU-only Image
docker compose build pytorch-cpu
- Build a GPU-enabled Image
docker compose build pytorch-gpu
5. Running Docker Containers
You can start the desired container depending on your hardware.
- To Start the CPU Container
docker compose up pytorch-cpu
- To Start the GPU Container
docker compose up pytorch-gpu
While a container is running, your local ai_PyTorch folder is mapped to /lab inside it, so all your files are accessible to JupyterLab and your Python scripts (a quick check of this mapping is shown at the end of the next step).
6. Accessing JupyterLab
Once the container is running, open your browser and go to:
http://127.0.0.1:8888/lab
No token or password is required.
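To confirm the folder mapping described in step 5, you can run a cell like the following in a new notebook. This is a minimal sketch that simply lists the /lab mount point mentioned above.
# List the mounted project folder; the subfolders of ai_PyTorch
# (Docker, notebooks, data, src, saved_model) should appear here.
from pathlib import Path

for entry in sorted(Path("/lab").iterdir()):
    print(entry.name)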
7. Testing Your PyTorch Installation
Open a new notebook in JupyterLab and run the following code cell to test your PyTorch installation and check for GPU availability:
import torch
print('PyTorch version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
If CUDA is available and you are running the GPU container, the output should show CUDA available: True. For the CPU container, it will display False.
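As a short follow-up, the sketch below selects the appropriate device and runs a small tensor operation on it, so the same notebook works in both the CPU and GPU containers.
import torch

# Select the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)

# Run a small matrix multiplication on the selected device.
a = torch.rand(3, 3, device=device)
b = torch.rand(3, 3, device=device)
print("Matrix product:\n", a @ b)

if device.type == "cuda":
    print("GPU name:", torch.cuda.get_device_name(0))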
8. Stopping and Restarting Containers
To stop and remove the running containers, use:
docker compose down
Note that docker compose down removes the containers, so after it you must start them again with docker compose up. If a container was only stopped (for example with Ctrl+C or docker compose stop), you can restart it with:
docker start pytorch_cpu
or
docker start pytorch_gpu
9. Summary
You now have a reproducible PyTorch environment ready for use on Windows, with full support for both CPU and NVIDIA GPU computation. All work can be performed in JupyterLab (browser) or in PyCharm connected to the Dockerized Jupyter server.
If you have installation issues, check your Docker Desktop version, ensure WSL2 and GPU support are enabled, and confirm you are running the correct container for your hardware.