## Overview
This template provides a production-ready TensorFlow instance as a Monk runnable. You can:

- Run it directly to get a managed TensorFlow environment for ML/DL development
- Inherit it in your own ML applications to add training and inference capabilities
## What this template manages
- TensorFlow container (`tensorflow/tensorflow` image)
- Jupyter notebook server (optional)
- GPU acceleration support (with `tensorflow-gpu`)
- Python environment with TensorFlow libraries
- Model serving with TensorFlow Serving
## Quick start (run directly)
- Load templates (see the commands sketched below)
- Run TensorFlow with defaults
- Access Jupyter notebook (if enabled): open http://localhost:8888 and use the token from the logs
- Customize configuration (recommended via inheritance). This runnable is configured through its `variables`. Secrets added with `monk secrets add` will not affect this runnable unless you inherit it and reference those secrets.
  - Preferred: inherit and replace variables with `secret("...")` as shown below.
  - Alternative: fork/clone and edit the `variables` in `tensorflow/tensorflow.yml`, then `monk load MANIFEST` and run.
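A minimal sketch of the direct-run flow. It assumes the runnable path `tensorflow/tensorflow` (taken from this template's file name) and a Monk CLI that provides `monk load`, `monk run`, and `monk logs`; `MANIFEST` stands for your manifest file:

```bash
# Load the template definitions into Monk
monk load MANIFEST

# Run the TensorFlow runnable with default variables
monk run tensorflow/tensorflow

# Follow the logs, e.g. to find the Jupyter token
monk logs tensorflow/tensorflow
```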
## Configuration

Key variables you can customize are defined in the `variables` section of `tensorflow/tensorflow.yml`. Data is persisted under `${monk-volume-path}/tensorflow` on the host. Custom configuration can be mounted into the container as needed.
## Use by inheritance (recommended for ML apps)

Inherit the TensorFlow runnable in your application and declare a connection. Example:
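The sketch below follows the usual Monk manifest shape; the namespace `my-ml-app`, the `app` runnable, and the `jupyter-token` secret name are illustrative, not part of this template:

```yaml
namespace: my-ml-app

tensorflow:
  defines: runnable
  inherits: tensorflow/tensorflow
  variables:
    # Reference a secret you added with `monk secrets add`
    jupyter-token: <- secret("jupyter-token")

app:
  defines: runnable
  connections:
    tf:
      runnable: my-ml-app/tensorflow
      service: tensorflow
  variables:
    # Resolve the TensorFlow host from the connection declared above
    tf-host: <- connection-hostname("tf")

stack:
  defines: process-group
  runnable-list:
    - my-ml-app/tensorflow
    - my-ml-app/app
```

Load your manifest with `monk load` and run the group (here `my-ml-app/stack`) to start both runnables together.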
## Ports and connectivity

- Service: `tensorflow` (configurable ports)
- Jupyter: TCP port `8888` (if enabled)
- TensorFlow Serving: TCP port `8501` (REST API)
- gRPC: TCP port `8500` (for serving)
- From other runnables in the same process group, use `connection-hostname("<connection-name>")` to resolve the TensorFlow host, as in the example above.
## Persistence and configuration

- Notebooks path: `${monk-volume-path}/tensorflow/notebooks`
- Models path: `${monk-volume-path}/tensorflow/models`
- Data path: `${monk-volume-path}/tensorflow/data`
- You can mount additional configuration files and datasets into the container as needed.
## Features
- Complete ML/DL framework
- Keras high-level API
- Eager execution for intuitive development
- GPU and TPU acceleration
- TensorFlow Serving for production inference
- TensorFlow Lite for mobile and embedded
- TensorFlow.js for browser-based ML
- Distributed training support
- Model optimization and quantization
## Use cases

TensorFlow excels at:

- Image classification and object detection
- Natural language processing
- Time series forecasting
- Recommendation systems
- Reinforcement learning
- Generative models (GANs, VAEs)
- Any deep learning task
## GPU Support

For GPU acceleration you need:

- NVIDIA GPU with CUDA support
- NVIDIA drivers installed
- Docker GPU runtime configured
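Once the runtime is wired up, a quick way to confirm that TensorFlow actually sees the GPU is to list physical devices from inside a GPU-enabled container; the Docker invocation below is one possible setup, not specific to this template:

```bash
# Run a GPU-enabled TensorFlow image and list the GPUs TensorFlow can see;
# an empty list means the driver/runtime wiring is not working.
docker run --rm --gpus all tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```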
## Model Serving

TensorFlow Serving provides production-ready model inference:

- REST and gRPC APIs
- Model versioning
- Batching for efficiency
- Hot-reloading of models
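For example, against the REST API on port `8501` (the model name `my_model` is a placeholder for whatever name your model is served under):

```bash
# Check the status of a served model
curl http://localhost:8501/v1/models/my_model

# Request a prediction; the shape of "instances" must match the model's input
curl -X POST http://localhost:8501/v1/models/my_model:predict \
  -d '{"instances": [[1.0, 2.0, 3.0]]}'
```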
## Related templates
- See other ML/AI templates in this repository for complementary services.
- Combine with monitoring tools for observability of training jobs and model serving.
## Troubleshooting
- For GPU support, verify CUDA compatibility with TensorFlow version and ensure NVIDIA drivers are properly installed.
- Ensure sufficient RAM/VRAM for your models. Monitor GPU memory usage to avoid OOM errors.
- Ensure host volumes are writable by the container user.
- Check the logs, including the Jupyter logs for the notebook token (see the commands below).
- Verify the TensorFlow installation inside the container (see the commands below).
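A sketch of these checks, assuming your Monk CLI provides `monk logs` and `monk exec` and that the runnable path is `tensorflow/tensorflow`:

```bash
# Stream the runnable's logs; the Jupyter token is printed here on startup
monk logs tensorflow/tensorflow
monk logs tensorflow/tensorflow | grep token

# Verify the TensorFlow installation inside the container
monk exec tensorflow/tensorflow python -c "import tensorflow as tf; print(tf.__version__)"
```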