Add swarm config

Author: lif
Date: 2025-10-10 19:36:51 +01:00
Parent: 520fef57a1
Commit: 069ac709e1

9 changed files with 594 additions and 285 deletions


@@ -1,8 +1,8 @@
# Bare Bones Vagrant Makefile
.PHONY: help start stop destroy status ssh-host ssh-machine1 ssh-machine2 ssh-machine3 ssh-machine4 clean \
ansible-ping ansible-setup ansible-deploy ansible-list ansible-facts \
reset-full reset-destroy reset-start reset-test reset-ssh reset-ansible reset-setup reset-deploy
.PHONY: help start stop destroy status ssh-manager ssh-worker1 ssh-worker2 ssh-worker3 clean \
ansible-ping ansible-setup ansible-deploy ansible-list ansible-facts \
reset-full reset-destroy reset-start reset-test reset-ssh reset-ansible reset-setup reset-deploy
# Default target
help: ## Show this help message
@@ -28,25 +28,21 @@ status: ## Show machine status
@echo "Showing machine status..."
./manage.sh status
ssh-host: ## Access host machine via SSH
@echo "Accessing host machine..."
./manage.sh ssh host
ssh-manager: ## Access swarm manager via SSH
@echo "Accessing swarm manager..."
./manage.sh ssh swarm-manager
ssh-machine1: ## Access machine1 via SSH
@echo "Accessing machine1..."
./manage.sh ssh machine1
ssh-worker1: ## Access swarm worker1 via SSH
@echo "Accessing swarm worker1..."
./manage.sh ssh swarm-worker1
ssh-machine2: ## Access machine2 via SSH
@echo "Accessing machine2..."
./manage.sh ssh machine2
ssh-worker2: ## Access swarm worker2 via SSH
@echo "Accessing swarm worker2..."
./manage.sh ssh swarm-worker2
ssh-machine3: ## Access machine3 via SSH
@echo "Accessing machine3..."
./manage.sh ssh machine3
ssh-machine4: ## Access machine4 via SSH
@echo "Accessing machine4..."
./manage.sh ssh machine4
ssh-worker3: ## Access swarm worker3 via SSH
@echo "Accessing swarm worker3..."
./manage.sh ssh swarm-worker3
clean: ## Clean up temporary files
@echo "Cleaning up temporary files..."
@@ -54,11 +50,10 @@ clean: ## Clean up temporary files
@echo "Cleanup complete!"
# Quick access targets
host: ssh-host ## Alias for ssh-host
m1: ssh-machine1 ## Alias for ssh-machine1
m2: ssh-machine2 ## Alias for ssh-machine2
m3: ssh-machine3 ## Alias for ssh-machine3
m4: ssh-machine4 ## Alias for ssh-machine4
manager: ssh-manager ## Alias for ssh-manager
w1: ssh-worker1 ## Alias for ssh-worker1
w2: ssh-worker2 ## Alias for ssh-worker2
w3: ssh-worker3 ## Alias for ssh-worker3
# Ansible targets
ansible-ping: ## Test Ansible connectivity to all hosts
@@ -71,8 +66,8 @@ ansible-setup: ## Run setup playbook to install dependencies
@echo "Running setup playbook..."
ansible-playbook -i inventory setup-playbook.yml
ansible-deploy: ## Run deployment playbook
@echo "Running deployment playbook..."
ansible-deploy: ## Run Docker Swarm deployment playbook
@echo "Running Docker Swarm deployment playbook..."
ansible-playbook -i inventory deploy-playbook.yml
ansible-list: ## List all hosts in inventory
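The `##` comments on each target are what a self-documenting `help` target typically scrapes. The actual `help` recipe is not shown in this hunk, so the following is only a sketch of the usual pattern; the sample targets and the awk one-liner are illustrative assumptions, not code from this repo:

```shell
# Simulated Makefile excerpt; the real file defines many more targets.
makefile_sample='start: ## Start all machines
stop: ## Stop all machines
ssh-manager: ## Access swarm manager via SSH'

# Extract "target: ## description" pairs and print them aligned,
# the way "make help" banners are commonly generated.
help_out=$(printf '%s\n' "$makefile_sample" |
  awk -F':.*## ' '/^[A-Za-z0-9_-]+:.*## /{printf "  %-14s %s\n", $1, $2}')
printf '%s\n' "$help_out"
```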


@@ -1,42 +1,48 @@
# Bare Bones Vagrant Setup
# Docker Swarm Vagrant Cluster
An **ultra-lightweight** Vagrant setup with 4 machines and a host for basic testing and development.
A **four-node** Docker Swarm cluster (1 manager, 3 workers) for practicing container orchestration and deployment.
## ⚡ Ultra-Lightweight Features
## 🐳 Docker Swarm Features
- **512MB RAM per machine** - Minimal memory footprint
- **Debian Linux base** - ~150MB base image, ~400MB with tools
- **No provisioning scripts** - Pure Debian base
- **No shared folders** - Disabled for performance
- **Minimal network** - Just basic connectivity
- **Fast startup** - Debian boots quickly
- **1 Swarm Manager** - Cluster orchestration and management
- **3 Swarm Workers** - Container execution and scaling
- **Overlay Networking** - Secure multi-host container communication
- **Service Discovery** - Built-in DNS and load balancing
- **High Availability** - Automatic failover and service recovery
- **Portainer UI** - Web-based cluster management interface
- **Traefik** - Reverse proxy with automatic service discovery
## 🏗️ Architecture
```
┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│    host     │ │  machine1   │ │  machine2   │ │  machine3   │
│ 192.168.56.1│ │192.168.56.10│ │192.168.56.11│ │192.168.56.12│
│             │ │             │ │             │ │             │
│ - Host      │ │ - Machine 1 │ │ - Machine 2 │ │ - Machine 3 │
│ - Gateway   │ │ - Debian    │ │ - Debian    │ │ - Debian    │
└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
┌─────────────┐
│  machine4   │
│192.168.56.13│
│             │
│ - Machine 4 │
│ - Debian    │
└─────────────┘
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│  swarm-manager  │ │  swarm-worker1  │ │  swarm-worker2  │ │  swarm-worker3  │
│  192.168.56.10  │ │  192.168.56.11  │ │  192.168.56.12  │ │  192.168.56.13  │
│                 │ │                 │ │                 │ │                 │
│ - Swarm Manager │ │ - Swarm Worker  │ │ - Swarm Worker  │ │ - Swarm Worker  │
│ - Portainer UI  │ │ - Container     │ │ - Container     │ │ - Container     │
│ - Traefik Proxy │ │   Execution     │ │   Execution     │ │   Execution     │
│ - Service       │ │ - Load          │ │ - Load          │ │ - Load          │
│   Discovery     │ │   Balancing     │ │   Balancing     │ │   Balancing     │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘ └────────┬────────┘
         └───────────────────┼────────────────────┼──────────────────┘
          ┌──────────────────┴────────────────────┴────────────┐
          │          Docker Swarm Overlay Network              │
          │          - Service Discovery                       │
          │          - Load Balancing                          │
          │          - Secure Communication                    │
          └────────────────────────────────────────────────────┘
```
## 📋 Prerequisites
- **Vagrant** 2.2+
- **VirtualBox** 6.0+ or **libvirt** (KVM)
- **3GB+ RAM** (512MB per machine)
- **4GB+ free disk space**
- **Ansible** 2.9+
- **2GB+ RAM** (512MB per machine + 1GB swap)
- **6GB+ free disk space**
## 🚀 Quick Start
@@ -52,8 +58,8 @@ A **ultra-lightweight** Vagrant setup with 4 machines and a host for basic testi
3. **Access a machine:**
```bash
make ssh-host
make ssh-machine1
make ssh-manager
make ssh-worker1
```
## 🎛️ Management Commands
@@ -64,11 +70,10 @@ make start # Start all machines
make stop # Stop all machines
make destroy # Destroy all machines
make status # Show machine status
make ssh-host # Access host machine
make ssh-machine1 # Access machine1
make ssh-machine2 # Access machine2
make ssh-machine3 # Access machine3
make ssh-machine4 # Access machine4
make ssh-manager # Access swarm manager
make ssh-worker1 # Access swarm worker1
make ssh-worker2 # Access swarm worker2
make ssh-worker3 # Access swarm worker3
```
### Using Management Script
@@ -77,8 +82,8 @@ make ssh-machine4 # Access machine4
./manage.sh stop # Stop all machines
./manage.sh destroy # Destroy all machines
./manage.sh status # Show machine status
./manage.sh ssh host # Access host machine
./manage.sh ssh machine1 # Access machine1
./manage.sh ssh swarm-manager # Access swarm manager
./manage.sh ssh swarm-worker1 # Access swarm worker1
```
### Using Vagrant Directly
@@ -87,19 +92,18 @@ vagrant up # Start all machines
vagrant halt # Stop all machines
vagrant destroy -f # Destroy all machines
vagrant status # Show machine status
vagrant ssh host # Access host machine
vagrant ssh machine1 # Access machine1
vagrant ssh swarm-manager # Access swarm manager
vagrant ssh swarm-worker1 # Access swarm worker1
```
## 🌐 Network Configuration
- **Host**: 192.168.56.1
- **Machine 1**: 192.168.56.10
- **Machine 2**: 192.168.56.11
- **Machine 3**: 192.168.56.12
- **Machine 4**: 192.168.56.13
- **Swarm Manager**: 192.168.56.10
- **Swarm Worker 1**: 192.168.56.11
- **Swarm Worker 2**: 192.168.56.12
- **Swarm Worker 3**: 192.168.56.13
All machines are connected via a private network and can communicate with each other.
All machines are connected via a private network and communicate through Docker Swarm overlay networking.
## 🔧 Machine Specifications

Vagrantfile (vendored)

@@ -32,88 +32,76 @@ Vagrant.configure("2") do |config|
libvirt.connect_via_ssh = false
end
# Host Machine
config.vm.define "host" do |host|
host.vm.hostname = "host"
host.vm.network "private_network", ip: "192.168.56.1"
# Swarm Manager
config.vm.define "swarm-manager" do |manager|
manager.vm.hostname = "swarm-manager"
manager.vm.network "private_network", ip: "192.168.56.10"
# Port forwarding for Docker Swarm services
manager.vm.network "forwarded_port", guest: 9000, host: 19000, id: "portainer"
manager.vm.network "forwarded_port", guest: 8080, host: 18080, id: "traefik"
manager.vm.network "forwarded_port", guest: 80, host: 18081, id: "webapp"
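With these forwards, services published on the manager are reachable from the workstation on offset host ports. A plain-shell sketch that just echoes the declared guest→host pairs (no real connections are made):

```shell
# Echo the guest->host port forwards declared for the swarm-manager VM above.
port_map=$(for mapping in '9000:19000:Portainer' '8080:18080:Traefik dashboard' '80:18081:web app'; do
  guest=${mapping%%:*}          # guest port, before the first colon
  rest=${mapping#*:}
  host_port=${rest%%:*}         # forwarded host port
  name=${rest#*:}               # service label
  echo "guest :$guest -> http://localhost:$host_port ($name)"
done)
printf '%s\n' "$port_map"
```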
host.vm.provider "virtualbox" do |vb|
vb.name = "host"
vb.memory = "512"
vb.cpus = 1
end
manager.vm.provider "virtualbox" do |vb|
vb.name = "swarm-manager"
vb.memory = "512"
vb.cpus = 1
end
host.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
# Machine 1
config.vm.define "machine1" do |machine1|
machine1.vm.hostname = "machine1"
machine1.vm.network "private_network", ip: "192.168.56.10"
manager.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
machine1.vm.provider "virtualbox" do |vb|
vb.name = "machine1"
vb.memory = "512"
vb.cpus = 1
end
# Swarm Worker 1
config.vm.define "swarm-worker1" do |worker1|
worker1.vm.hostname = "swarm-worker1"
worker1.vm.network "private_network", ip: "192.168.56.11"
machine1.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
# Machine 2
config.vm.define "machine2" do |machine2|
machine2.vm.hostname = "machine2"
machine2.vm.network "private_network", ip: "192.168.56.11"
worker1.vm.provider "virtualbox" do |vb|
vb.name = "swarm-worker1"
vb.memory = "512"
vb.cpus = 1
end
machine2.vm.provider "virtualbox" do |vb|
vb.name = "machine2"
vb.memory = "512"
vb.cpus = 1
end
worker1.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
machine2.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
# Machine 3
config.vm.define "machine3" do |machine3|
machine3.vm.hostname = "machine3"
machine3.vm.network "private_network", ip: "192.168.56.12"
# Swarm Worker 2
config.vm.define "swarm-worker2" do |worker2|
worker2.vm.hostname = "swarm-worker2"
worker2.vm.network "private_network", ip: "192.168.56.12"
machine3.vm.provider "virtualbox" do |vb|
vb.name = "machine3"
vb.memory = "512"
vb.cpus = 1
end
worker2.vm.provider "virtualbox" do |vb|
vb.name = "swarm-worker2"
vb.memory = "512"
vb.cpus = 1
end
machine3.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
# Machine 4
config.vm.define "machine4" do |machine4|
machine4.vm.hostname = "machine4"
machine4.vm.network "private_network", ip: "192.168.56.13"
worker2.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
machine4.vm.provider "virtualbox" do |vb|
vb.name = "machine4"
vb.memory = "512"
vb.cpus = 1
end
# Swarm Worker 3
config.vm.define "swarm-worker3" do |worker3|
worker3.vm.hostname = "swarm-worker3"
worker3.vm.network "private_network", ip: "192.168.56.13"
machine4.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
worker3.vm.provider "virtualbox" do |vb|
vb.name = "swarm-worker3"
vb.memory = "512"
vb.cpus = 1
end
worker3.vm.provider "libvirt" do |libvirt|
libvirt.memory = 512
libvirt.cpus = 1
end
end
end


@@ -1,132 +1,159 @@
---
# Deployment Playbook for Debian Linux
# This playbook deploys applications and services
# Docker Swarm Deployment Playbook
# This playbook initializes Docker Swarm cluster and deploys services
- name: Deploy applications on Debian Linux
hosts: alpine
- name: Initialize Docker Swarm Manager
hosts: swarm_managers
become: yes
gather_facts: yes
tasks:
- name: Update apt package index
apt:
update_cache: yes
cache_valid_time: 3600
- name: Check if Docker Swarm is already initialized
command: docker info --format "{{ '{{' }}.Swarm.LocalNodeState{{ '}}' }}"
register: swarm_status_check
changed_when: false
failed_when: false
- name: Install Docker
apt:
name:
- docker.io
- docker-compose
state: present
- name: Initialize Docker Swarm
command: docker swarm init --advertise-addr 192.168.56.10
register: swarm_init_result
changed_when: swarm_init_result.rc == 0
failed_when: swarm_init_result.rc not in [0, 1]
when: swarm_status_check.stdout != "active"
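The `failed_when: rc not in [0, 1]` guard makes the init task safe to re-run: `docker swarm init` exits 1 when the node is already part of a swarm. A sketch of the same exit-code convention in plain shell (the commands are stand-ins, not real docker calls):

```shell
# Treat rc 0 (did the work) and rc 1 (already done) as success,
# anything else as a genuine failure -- mirroring failed_when above.
run_idempotent() {
  "$@" && rc=0 || rc=$?
  if [ "$rc" -gt 1 ]; then
    echo "fatal (rc=$rc)"
    return 1
  fi
  echo "ok (rc=$rc)"
}

first_run=$(run_idempotent true)             # stands in for a successful init
second_run=$(run_idempotent sh -c 'exit 1')  # stands in for "already in swarm"
```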
- name: Add vagrant user to docker group
user:
name: vagrant
groups: docker
append: yes
- name: Start and enable Docker service
systemd:
name: docker
state: started
enabled: yes
- name: Test Docker installation
command: docker --version
register: docker_version
- name: Get worker join token
command: docker swarm join-token worker
register: worker_token_result
changed_when: false
- name: Show Docker version
- name: Extract worker join command
set_fact:
worker_join_token: "{{ worker_token_result.stdout_lines[2] }}"
- name: Display worker join command
debug:
msg: "{{ docker_version.stdout }}"
msg: "Worker join command: {{ worker_join_token }}"
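`stdout_lines[2]` works because `docker swarm join-token worker` prints a sentence, a blank line, and then the join command on the third line (index 2). Simulated below; the token value is made up:

```shell
# Simulated "docker swarm join-token worker" output.
token_output='To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-xyzxyz 192.168.56.10:2377'

# Third line = the actual join command, matching stdout_lines[2].
join_line=$(printf '%s\n' "$token_output" | sed -n '3p')
```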
- name: Pull a lightweight test image
docker_image:
name: alpine:latest
source: pull
- name: Get manager join token
command: docker swarm join-token manager
register: manager_token_result
changed_when: false
- name: Run a test container
docker_container:
name: test-container
image: alpine:latest
command: echo "Docker is working on {{ inventory_hostname }}!"
state: present
auto_remove: yes
- name: Display manager join command
debug:
msg: "Manager join command: {{ manager_token_result.stdout_lines[2] }}"
- name: Create application directory
file:
path: /opt/app
state: directory
mode: '0755'
- name: Create sample application
- name: Copy Docker Compose stack file
copy:
content: |
#!/bin/bash
echo "Hello from {{ inventory_hostname }}!"
echo "Running on Debian Linux"
echo "Memory: $(free -m | grep Mem | awk '{print $2}')MB"
echo "Disk: $(df -h / | tail -1 | awk '{print $2}')"
dest: /opt/app/hello.sh
mode: '0755'
- name: Create systemd service for sample app
copy:
content: |
[Unit]
Description=Sample Application
After=network.target
[Service]
Type=simple
User=vagrant
ExecStart=/opt/app/hello.sh
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
dest: /etc/systemd/system/sample-app.service
src: docker-stack.yml
dest: /home/vagrant/docker-stack.yml
mode: '0644'
- name: Reload systemd daemon
systemd:
daemon_reload: yes
- name: Deploy Docker Swarm stack
command: docker stack deploy -c docker-stack.yml swarm-stack
register: stack_result
changed_when: stack_result.rc == 0
failed_when: stack_result.rc not in [0, 1]
- name: Enable sample application service
systemd:
name: sample-app
enabled: yes
state: started
- name: Check Docker Swarm status
command: docker node ls
register: swarm_status
changed_when: false
- name: Check service status
command: systemctl status sample-app
register: service_status
- name: Display Swarm status
debug:
msg: "{{ swarm_status.stdout_lines }}"
- name: Check Docker stack services
command: docker stack services swarm-stack
register: services_status
changed_when: false
- name: Display stack services status
debug:
msg: "{{ services_status.stdout_lines }}"
- name: Join Docker Swarm Workers
hosts: swarm_workers
become: yes
gather_facts: no
tasks:
- name: Join Docker Swarm as worker
command: "{{ hostvars[groups['swarm_managers'][0]]['worker_join_token'] | replace('10.0.2.15:2377', '192.168.56.10:2377') }}"
register: join_result
changed_when: join_result.rc == 0
failed_when: join_result.rc not in [0, 1]
- name: Verify node joined successfully
command: docker node ls
register: node_status
changed_when: false
ignore_errors: yes
- name: Show service status
- name: Display node status
debug:
msg: "{{ service_status.stdout_lines }}"
- name: Create deployment info file
copy:
content: |
Deployment completed on {{ inventory_hostname }}
Date: {{ ansible_date_time.iso8601 }}
OS: {{ ansible_distribution }} {{ ansible_distribution_version }}
Architecture: {{ ansible_architecture }}
Memory: {{ ansible_memtotal_mb }}MB
Docker: {{ docker_version.stdout }}
dest: /opt/app/deployment-info.txt
mode: '0644'
- name: Display deployment info
command: cat /opt/app/deployment-info.txt
register: deployment_info
msg: "{{ node_status.stdout_lines if node_status.rc == 0 else 'Node not accessible' }}"
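The `replace()` in the workers' join task guards against the token advertising Vagrant's NAT address (`10.0.2.15`) instead of the private-network IP. The same rewrite in shell, applied to an illustrative fake token:

```shell
# A join command as it might be captured from the manager (token is fake).
join_cmd='docker swarm join --token SWMTKN-1-abc123 10.0.2.15:2377'

# Rewrite the NAT address to the private-network manager address,
# mirroring the playbook's replace() filter.
fixed_cmd=$(printf '%s' "$join_cmd" | sed 's/10\.0\.2\.15:2377/192.168.56.10:2377/')
```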
- name: Verify Docker Swarm Cluster
hosts: swarm_managers
become: yes
gather_facts: no
tasks:
- name: Wait for all nodes to be ready
command: docker node ls
register: nodes_check
until: nodes_check.stdout_lines | length >= 5 # Header + 4 nodes
retries: 10
delay: 5
changed_when: false
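The `until` retry considers the cluster formed once `docker node ls` prints five lines: one header plus four nodes. Checked here against simulated output:

```shell
# Simulated "docker node ls" for a fully joined 4-node cluster.
node_ls='ID             HOSTNAME        STATUS    AVAILABILITY   MANAGER STATUS
aaa111         swarm-manager   Ready     Active         Leader
bbb222         swarm-worker1   Ready     Active
ccc333         swarm-worker2   Ready     Active
ddd444         swarm-worker3   Ready     Active'

node_lines=$(printf '%s\n' "$node_ls" | wc -l)  # header + 4 nodes
```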
- name: Show deployment info
- name: Check all nodes are active
command: docker node ls --format "{{ '{{' }}.Status{{ '}}' }}"
register: node_statuses
changed_when: false
- name: Verify all nodes are ready
assert:
that:
- "'Ready' in node_statuses.stdout"
- "'Active' in node_statuses.stdout"
fail_msg: "Not all nodes are ready and active"
- name: Check stack service health
command: docker stack services swarm-stack --format "table {{ '{{' }}.Name{{ '}}' }}\t{{ '{{' }}.Replicas{{ '}}' }}"
register: service_replicas
changed_when: false
- name: Display stack service replicas
debug:
msg: "{{ deployment_info.stdout_lines }}"
msg: "{{ service_replicas.stdout_lines }}"
- name: Create cluster info file
copy:
content: |
Docker Swarm Cluster Information
================================
Manager: {{ groups['swarm_managers'][0] }}
Workers: {{ groups['swarm_workers'] | join(', ') }}
Total Nodes: {{ groups['swarm_nodes'] | length }}
Services Deployed:
- Portainer (Management UI): http://{{ ansible_default_ipv4.address }}:9000
- Traefik Dashboard: http://{{ ansible_default_ipv4.address }}:8080
- Web Application: http://{{ ansible_default_ipv4.address }}
Network: swarm-network (overlay)
Created: {{ ansible_date_time.iso8601 }}
dest: /opt/swarm-cluster-info.txt
mode: '0644'
- name: Display cluster information
command: cat /opt/swarm-cluster-info.txt
register: cluster_info
changed_when: false
- name: Show cluster information
debug:
msg: "{{ cluster_info.stdout_lines }}"

docker-stack.yml (new file)

@@ -0,0 +1,74 @@
version: '3.8'
services:
portainer:
image: portainer/portainer-ce:latest
ports:
- "9000:9000"
- "9443:9443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- portainer_data:/data
deploy:
replicas: 1
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
networks:
- swarm-network
traefik:
image: traefik:v2.10
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
command:
- --api.dashboard=true
- --api.insecure=true
- --providers.docker=true
- --providers.docker.swarmMode=true
- --providers.docker.exposedbydefault=false
- --entrypoints.web.address=:80
deploy:
replicas: 1
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
networks:
- swarm-network
web-app:
image: nginx:alpine
deploy:
replicas: 3
restart_policy:
condition: on-failure
labels:
- traefik.enable=true
- traefik.http.routers.webapp.rule=Host(`192.168.56.10`)
- traefik.http.services.webapp.loadbalancer.server.port=80
networks:
- swarm-network
hello-world:
image: hello-world:latest
deploy:
replicas: 2
restart_policy:
condition: on-failure
networks:
- swarm-network
volumes:
portainer_data:
driver: local
networks:
swarm-network:
driver: overlay
attachable: true


@@ -1,22 +1,21 @@
# Ansible Inventory for Alpine Vagrant Cluster
# Ansible Inventory for Docker Swarm Cluster
# This file defines the hosts and groups for Ansible playbooks
[all:vars]
ansible_user=vagrant
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
[hosts]
host ansible_host=127.0.0.1 ansible_port=2222 ansible_ssh_private_key_file=.vagrant/machines/host/virtualbox/private_key
[swarm_managers]
swarm-manager ansible_host=127.0.0.1 ansible_port=2204 ansible_ssh_private_key_file=.vagrant/machines/swarm-manager/virtualbox/private_key
[machines]
machine1 ansible_host=127.0.0.1 ansible_port=2200 ansible_ssh_private_key_file=.vagrant/machines/machine1/virtualbox/private_key
machine2 ansible_host=127.0.0.1 ansible_port=2201 ansible_ssh_private_key_file=.vagrant/machines/machine2/virtualbox/private_key
machine3 ansible_host=127.0.0.1 ansible_port=2202 ansible_ssh_private_key_file=.vagrant/machines/machine3/virtualbox/private_key
machine4 ansible_host=127.0.0.1 ansible_port=2203 ansible_ssh_private_key_file=.vagrant/machines/machine4/virtualbox/private_key
[swarm_workers]
swarm-worker1 ansible_host=127.0.0.1 ansible_port=2205 ansible_ssh_private_key_file=.vagrant/machines/swarm-worker1/virtualbox/private_key
swarm-worker2 ansible_host=127.0.0.1 ansible_port=2206 ansible_ssh_private_key_file=.vagrant/machines/swarm-worker2/virtualbox/private_key
swarm-worker3 ansible_host=127.0.0.1 ansible_port=2207 ansible_ssh_private_key_file=.vagrant/machines/swarm-worker3/virtualbox/private_key
[alpine:children]
hosts
machines
[swarm_nodes:children]
swarm_managers
swarm_workers
[alpine:vars]
[swarm_nodes:vars]
ansible_python_interpreter=/usr/bin/python3
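With this group layout, plays can target `swarm_managers`, `swarm_workers`, or the combined `swarm_nodes` group. A quick sketch of pulling the worker host names out of such an INI inventory; the awk parser is illustrative, not part of the repo:

```shell
# Simulated slice of the INI inventory above.
inventory='[swarm_managers]
swarm-manager ansible_host=127.0.0.1 ansible_port=2204

[swarm_workers]
swarm-worker1 ansible_host=127.0.0.1 ansible_port=2205
swarm-worker2 ansible_host=127.0.0.1 ansible_port=2206
swarm-worker3 ansible_host=127.0.0.1 ansible_port=2207'

# Print the first column of each host line in the [swarm_workers] section.
workers=$(printf '%s\n' "$inventory" |
  awk '/^\[/{in_w=($0=="[swarm_workers]")} in_w && !/^\[/ && NF {print $1}')
```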


@@ -112,25 +112,24 @@ show_help() {
echo " help Show this help message"
echo ""
echo "Machines:"
echo " host Host machine (192.168.56.1)"
echo " machine1 Machine 1 (192.168.56.10)"
echo " machine2 Machine 2 (192.168.56.11)"
echo " machine3 Machine 3 (192.168.56.12)"
echo " machine4 Machine 4 (192.168.56.13)"
echo " swarm-manager Swarm Manager (192.168.56.10)"
echo " swarm-worker1 Swarm Worker 1 (192.168.56.11)"
echo " swarm-worker2 Swarm Worker 2 (192.168.56.12)"
echo " swarm-worker3 Swarm Worker 3 (192.168.56.13)"
echo ""
echo "Ansible Commands:"
echo " ping Test connectivity to all hosts"
echo " setup Install dependencies (Python, tools, swap)"
echo " deploy Deploy applications and services"
echo " setup Install dependencies (Python, Docker, swap)"
echo " deploy Deploy Docker Swarm cluster"
echo " list List all hosts"
echo " facts Gather system facts"
echo ""
echo "Examples:"
echo " $0 start # Start all machines"
echo " $0 ssh host # Access host machine"
echo " $0 ssh swarm-manager # Access swarm manager"
echo " $0 ansible ping # Test Ansible connectivity"
echo " $0 ansible setup # Install dependencies"
echo " $0 ansible deploy # Deploy applications"
echo " $0 ansible deploy # Deploy Docker Swarm cluster"
}
# Main script logic


@@ -109,7 +109,7 @@ start_all() {
test_ssh() {
print_header "Testing SSH Connectivity"
local machines=("host" "machine1" "machine2" "machine3" "machine4")
local machines=("swarm-manager" "swarm-worker1" "swarm-worker2" "swarm-worker3")
local failed_machines=()
for machine in "${machines[@]}"; do
@@ -231,6 +231,203 @@ run_tests() {
test_results+=("Docker: ❌ FAIL")
fi
# Test 7: Verify Docker Swarm is initialized
print_info "Verifying Docker Swarm cluster..."
if ansible swarm_managers -i inventory -m shell -a "docker node ls" >/dev/null 2>&1; then
test_results+=("Swarm: ✅ PASS")
else
test_results+=("Swarm: ❌ FAIL")
fi
# Display test results
print_header "Test Results Summary"
for result in "${test_results[@]}"; do
echo " $result"
done
# Count failures
local failures=$(printf '%s\n' "${test_results[@]}" | grep -c "❌ FAIL" || true)
if [ "$failures" -eq 0 ]; then
print_success "All tests passed! 🎉"
return 0
else
print_error "$failures test(s) failed"
return 1
fi
}
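The failure count in `run_tests` is just a `grep -c` over the collected result strings; the same logic isolated with sample data:

```shell
# Sample results array flattened to lines, as printf '%s\n' "${arr[@]}" does.
sample_results='SSH: ✅ PASS
Ansible: ✅ PASS
Swarm: ❌ FAIL'

# grep -c prints the number of matching lines (and exits non-zero on zero
# matches, hence the "|| true" guard in the script itself).
fail_count=$(printf '%s\n' "$sample_results" | grep -c '❌ FAIL' || true)
```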
# Function to show help
show_help() {
echo "Reset and Test Script for Debian Vagrant Cluster"
echo ""
echo "Usage: $0 [COMMAND]"
echo ""
echo "Commands:"
echo " full-reset Destroy everything and run full test cycle"
echo " destroy-only Only destroy all machines"
echo " start-only Only start all machines"
echo " test-only Only run tests (assumes machines are running)"
echo " ssh-test Only test SSH connectivity"
echo " ansible-test Only test Ansible connectivity"
echo " setup-only Only run setup playbook"
echo " deploy-only Only run deployment playbook"
echo " help Show this help message"
echo ""
echo "Examples:"
echo " $0 full-reset # Complete destroy/recreate/test cycle"
echo " $0 test-only # Run tests on existing machines"
echo " $0 ssh-test # Quick SSH connectivity check"
echo ""
echo "This script will:"
echo " 1. Check prerequisites (vagrant, ansible, make)"
echo " 2. Destroy all VMs and clean up"
echo " 3. Start all VMs fresh"
echo " 4. Test SSH connectivity"
echo " 5. Test Ansible connectivity"
echo " 6. Run setup playbook (dependencies, swap)"
echo " 7. Run deployment playbook (Docker, services)"
echo " 8. Verify everything is working"
}
# Main script logic
main() {
local command=${1:-help}
case "$command" in
full-reset)
print_header "Full Reset and Test Cycle"
check_prerequisites
destroy_all
start_all
run_tests
;;
destroy-only)
print_header "Destroy Only"
check_prerequisites
destroy_all
;;
start-only)
print_header "Start Only"
check_prerequisites
start_all
;;
test-only)
print_header "Test Only"
check_prerequisites
run_tests
;;
ssh-test)
print_header "SSH Test Only"
check_prerequisites
test_ssh
;;
ansible-test)
print_header "Ansible Test Only"
check_prerequisites
test_ansible
;;
setup-only)
print_header "Setup Only"
check_prerequisites
run_setup
;;
deploy-only)
print_header "Deploy Only"
check_prerequisites
run_deployment
;;
help|--help|-h)
show_help
;;
*)
print_error "Unknown command: $command"
show_help
exit 1
;;
esac
}
# Run main function with all arguments
main "$@"
else
print_error "Setup playbook failed"
return 1
fi
}
# Function to run deployment playbook
run_deployment() {
print_header "Running Deployment Playbook"
print_info "Deploying applications and services..."
if ansible-playbook -i inventory deploy-playbook.yml; then
print_success "Deployment playbook completed successfully"
return 0
else
print_error "Deployment playbook failed"
return 1
fi
}
# Function to run comprehensive tests
run_tests() {
print_header "Running Comprehensive Tests"
local test_results=()
# Test 1: SSH Connectivity
if test_ssh; then
test_results+=("SSH: ✅ PASS")
else
test_results+=("SSH: ❌ FAIL")
fi
# Test 2: Ansible Connectivity
if test_ansible; then
test_results+=("Ansible: ✅ PASS")
else
test_results+=("Ansible: ❌ FAIL")
fi
# Test 3: Setup Playbook
if run_setup; then
test_results+=("Setup: ✅ PASS")
else
test_results+=("Setup: ❌ FAIL")
fi
# Test 4: Deployment Playbook
if run_deployment; then
test_results+=("Deployment: ✅ PASS")
else
test_results+=("Deployment: ❌ FAIL")
fi
# Test 5: Verify swap is active
print_info "Verifying swap is active..."
if ansible all -i inventory -m shell -a "cat /proc/swaps" | grep -q "swapfile"; then
test_results+=("Swap: ✅ PASS")
else
test_results+=("Swap: ❌ FAIL")
fi
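Test 5 greps each host's `/proc/swaps` for the swapfile entry the setup playbook creates. The same check against a simulated `/proc/swaps`:

```shell
# Simulated /proc/swaps from a node with the swapfile active.
proc_swaps='Filename    Type   Size     Used  Priority
/swapfile   file   1048572  0     -2'

if printf '%s\n' "$proc_swaps" | grep -q "swapfile"; then
  swap_ok=1
else
  swap_ok=0
fi
```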
# Test 6: Verify Docker is running
print_info "Verifying Docker is running..."
if ansible all -i inventory -m shell -a "docker --version" >/dev/null 2>&1; then
test_results+=("Docker: ✅ PASS")
else
test_results+=("Docker: ❌ FAIL")
fi


@@ -1,9 +1,9 @@
---
# Setup Playbook for Debian Linux
# This playbook installs essential dependencies including Python and creates swap
# Setup Playbook for Docker Swarm Cluster
# This playbook installs essential dependencies including Python, Docker, and creates swap
- name: Setup Debian Linux hosts
hosts: alpine
- name: Setup Docker Swarm nodes
hosts: swarm_nodes
become: yes
gather_facts: no
@@ -28,15 +28,15 @@
name:
- python3
- python3-pip
- vim
- ansible
- curl
- wget
- htop
- tree
- git
- openssh-client
- sudo
- util-linux
- apt-transport-https
- ca-certificates
- gnupg
- lsb-release
state: present
- name: Create sudoers entry for vagrant user
@@ -46,12 +46,38 @@
create: yes
mode: '0440'
- name: Install Python packages
pip:
name:
- ansible
become_user: vagrant
- name: Add Docker GPG key
apt_key:
url: https://download.docker.com/linux/debian/gpg
state: present
- name: Add Docker repository
apt_repository:
repo: "deb [arch=amd64] https://download.docker.com/linux/debian bookworm stable"
state: present
update_cache: yes
- name: Install Docker CE
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-compose-plugin
state: present
- name: Add vagrant user to docker group
user:
name: vagrant
groups: docker
append: yes
- name: Start and enable Docker service
systemd:
name: docker
state: started
enabled: yes
- name: Verify Python installation
command: python3 --version