This commit is contained in:
Daniel Ari Friedman 2025-02-07 08:50:53 -08:00
parent ccdeafd068
commit 008da68fc6
43 changed files with 2805 additions and 407 deletions

.gitignore (vendored, regular file, 62 lines)

@@ -0,0 +1,62 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Virtual Environment
venv/
env/
ENV/
.env
# IDEs and Editors
.idea/
.vscode/
*.swp
*.swo
*~
# OS Generated Files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Project Specific
*.log
logs/
output/
results/
data/
config/*.local.yaml
.cache/
.pytest_cache/
coverage.xml
.coverage
# Jupyter Notebooks
.ipynb_checkpoints
*.ipynb
# Documentation
docs/_build/

Things/Ant_Colony/Ant_Colony_README.md (regular file, 186 lines)

@@ -0,0 +1,186 @@
# Ant Colony - Active Inference Multi-Agent System
## Overview
This implementation models an ant colony as a multi-agent system where each ant (Nestmate) operates according to the Free Energy Principle (FEP). The system demonstrates how individual agents following active inference can give rise to emergent collective behaviors and self-organization.
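Under the FEP, each agent minimizes variational free energy, which in its standard form upper-bounds sensory surprise:

$$F = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right] \;\geq\; -\ln p(o),$$

where $q(s)$ is the agent's approximate posterior over hidden states $s$ and $p(o, s)$ its generative model of observations $o$. Perception updates $q(s)$ to reduce $F$; action changes $o$ to the same end.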
## Core Components
### 1. Nestmate Agent
Individual ant agents that implement active inference (see the sketch after this list):
- Sensory observations (pheromones, food, obstacles, nestmates)
- Internal generative model of environment
- Action selection through FEP
- Belief updating and learning
- Pheromone deposition behaviors
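Below is a minimal, self-contained sketch of this perception-action cycle for a single scalar pheromone reading. The Gaussian generative model and gradient steps are illustrative assumptions, not the committed `Nestmate` implementation in `agents/nestmate.py`:

```python
import numpy as np

OBS_VAR, PRIOR_VAR = 0.1, 1.0  # assumed sensory and prior variances

def perceive(obs, prior, lr=0.1, n_steps=20):
    """Update a scalar belief by gradient descent on Gaussian free energy."""
    belief = prior
    for _ in range(n_steps):
        # dF/d(belief): observation prediction error + deviation from prior
        grad = (belief - obs) / OBS_VAR + (belief - prior) / PRIOR_VAR
        belief -= lr * grad
    return belief

belief = 0.0                                # prior: no pheromone expected
for t in range(5):
    obs = 1.0 + 0.05 * np.random.randn()    # noisy pheromone sample
    belief = perceive(obs, prior=belief)    # perception: minimize F
    action = np.sign(belief)                # action: climb the believed gradient
    print(f"t={t} belief={belief:.3f} action={action:+.0f}")
```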
### 2. Colony Environment
Shared environment where agents interact (a pheromone-field sketch follows the list):
- Pheromone diffusion and evaporation
- Food source dynamics
- Obstacle and terrain features
- Nest structure and organization
- Physical constraints and interactions
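Pheromone fields in this commit are updated by Gaussian diffusion plus exponential evaporation (see `PheromoneGrid` in `environment/world.py`). A stand-alone version of that update rule:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

grid = np.zeros((100, 100))
grid[50, 50] = 10.0                          # a fresh deposit at the center

diffusion_rate, evaporation_rate, dt = 0.1, 0.01, 1.0
for _ in range(100):
    grid = gaussian_filter(grid, sigma=diffusion_rate * dt)  # spread outward
    grid *= 1.0 - evaporation_rate * dt                      # decay in place
print(f"peak concentration after 100 steps: {grid.max():.4f}")
```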
### 3. Multi-Agent Framework
System for managing agent interactions:
- Agent communication protocols
- Spatial relationships
- Task allocation
- Resource distribution
- Collective decision making
## Implementation Details
### Nestmate Agent Model
```python
class Nestmate:
def __init__(self):
# State space
self.position = None # Physical location
self.orientation = None # Heading direction
self.carrying = None # What agent is carrying
self.energy = 1.0 # Energy level
# Sensory inputs
self.observations = {
'pheromone': None, # Pheromone gradients
'food': None, # Food sources
'nestmates': None, # Other agents
'obstacles': None # Environmental obstacles
}
# Internal model parameters
self.beliefs = None # Current belief state
self.preferences = None # Goal-directed preferences
self.policies = None # Action policies
# Learning parameters
self.learning_rate = 0.1
self.exploration_rate = 0.2
```
### Key Features
1. Active Inference Implementation
- Hierarchical generative models
- Precision-weighted prediction errors
- Free energy minimization
- Action-perception cycles
- Belief updating through message passing
2. Pheromone System
- Multiple pheromone types
- Diffusion mechanics
- Evaporation rates
- Gradient following
- Trail reinforcement
3. Task Allocation (a response-threshold sketch follows this feature list)
- Foraging
- Nest maintenance
- Brood care
- Defense
- Exploration
4. Learning & Adaptation
- Individual learning
- Social learning
- Environmental adaptation
- Task switching
- Skill development
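The committed `Colony` assigns initial tasks by fixed ratios and switches them based on energy and colony needs; the classic response-threshold model sketched below is an illustrative alternative for the same allocation problem, not code from this commit:

```python
import numpy as np

rng = np.random.default_rng(0)
tasks = ["foraging", "maintenance", "nursing", "defense", "exploration"]
thresholds = rng.uniform(0.2, 0.8, size=(100, len(tasks)))  # per-ant, per-task
stimuli = np.array([0.9, 0.5, 0.3, 0.2, 0.4])               # colony-level needs

# Response-threshold rule: engagement tendency = s^2 / (s^2 + theta^2)
tendency = stimuli**2 / (stimuli**2 + thresholds**2)
choices = np.array([rng.choice(len(tasks), p=row / row.sum()) for row in tendency])
for i, task in enumerate(tasks):
    print(f"{task}: {(choices == i).sum()} ants")
```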
## Configuration
The system is configured through four YAML files, loaded as sketched below:
1. agent_config.yaml - Individual agent parameters
2. colony_config.yaml - Colony-level parameters
3. environment_config.yaml - Environmental settings
4. simulation_config.yaml - Simulation parameters
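A minimal way to load and merge these files with PyYAML (the committed `Simulation._load_config` reads a single file; merging all four, as below, is an assumption):

```python
import yaml

def load_configs(*paths):
    """Load several YAML files into one dict (shallow merge; later files win)."""
    merged = {}
    for path in paths:
        with open(path) as f:
            merged.update(yaml.safe_load(f) or {})
    return merged

config = load_configs(
    'config/agent_config.yaml',
    'config/colony_config.yaml',
    'config/environment_config.yaml',
    'config/simulation_config.yaml',
)
```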
## Usage
```python
from simulation import Simulation

# Simulation loads the YAML configuration and constructs the World
# (environment) and Colony internally; see simulation.py.
sim = Simulation(config_path='config/simulation_config.yaml')

# Run for a fixed number of steps
sim.run(num_steps=1000)
```
## Analysis Tools
1. Behavioral Analysis
- Task distribution metrics
- Spatial patterns
- Temporal dynamics
- Efficiency measures
2. Network Analysis (see the networkx example after this list)
- Interaction networks
- Information flow
- Task networks
- Spatial networks
3. Collective Intelligence Metrics
- Emergence measures
- Synchronization indices
- Collective decision accuracy
- Adaptation rates
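The coordination metrics reported by `Colony._update_statistics` come from `networkx`; a small stand-alone example of the same calls:

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3)])  # toy interaction network

print(nx.density(g))             # fraction of possible edges present
print(nx.average_clustering(g))  # mean local clustering coefficient
```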
## Visualization
1. Real-time Visualization (a matplotlib sketch follows this list)
- Agent positions and movements
- Pheromone gradients
- Resource distribution
- Task allocation
2. Analysis Plots
- Behavioral statistics
- Learning curves
- Network diagrams
- Performance metrics
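`Simulation._plot_simulation_state` draws the terrain with `imshow` and agents with `scatter`; stripped to its essentials, with random data standing in for a live simulation:

```python
import numpy as np
import matplotlib.pyplot as plt

terrain = np.random.rand(100, 100)            # stand-in height map
agents = np.random.uniform(0, 100, (50, 2))   # stand-in agent positions

fig, ax = plt.subplots()
ax.imshow(terrain, cmap='terrain')
ax.scatter(agents[:, 0], agents[:, 1], c='red', alpha=0.6, s=10)
ax.set_title('Colony Simulation')
plt.show()
```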
## Project Structure
```
ant_colony/
├── agents/
│ ├── nestmate.py
│ ├── sensor.py
│ └── actuator.py
├── environment/
│ ├── world.py
│ ├── pheromone.py
│ └── resources.py
├── models/
│ ├── generative_model.py
│ ├── belief_update.py
│ └── policy_selection.py
├── analysis/
│ ├── metrics.py
│ ├── network.py
│ └── visualization.py
├── config/
│ ├── agent_config.yaml
│ ├── colony_config.yaml
│ ├── environment_config.yaml
│ └── simulation_config.yaml
└── tests/
├── test_agent.py
├── test_environment.py
└── test_simulation.py
```

Things/Ant_Colony/agents/nestmate.py (regular file, 335 lines)

@@ -0,0 +1,335 @@
"""
Nestmate Agent Implementation
This module implements the Nestmate agent class, which represents an individual ant
in the colony using the Free Energy Principle (FEP) and Active Inference framework.
"""
import numpy as np
from typing import Dict, List, Tuple, Optional
import torch
import torch.nn as nn
import torch.nn.functional as F
from dataclasses import dataclass
from enum import Enum
class TaskType(Enum):
"""Possible task types for a Nestmate agent."""
FORAGING = "foraging"
MAINTENANCE = "maintenance"
NURSING = "nursing"
DEFENSE = "defense"
EXPLORATION = "exploration"
@dataclass
class Observation:
"""Container for sensory observations."""
pheromone: np.ndarray # Pheromone gradients
food: np.ndarray # Food source locations
nestmates: np.ndarray # Other agent positions
obstacles: np.ndarray # Obstacle positions
nest: np.ndarray # Nest location/gradient
class GenerativeModel(nn.Module):
"""Hierarchical generative model for active inference."""
def __init__(self, config: dict):
super().__init__()
# Model dimensions
self.obs_dim = config['dimensions']['observations']
self.state_dim = config['dimensions']['states']
self.action_dim = config['dimensions']['actions']
self.temporal_horizon = config['dimensions']['planning_horizon']
# Hierarchical layers
self.layers = nn.ModuleList([
nn.Linear(self.state_dim, self.state_dim)
for _ in range(config['active_inference']['model']['hierarchical_levels'])
])
# State transition model (dynamics)
self.transition = nn.Sequential(
nn.Linear(self.state_dim + self.action_dim, self.state_dim * 2),
nn.ReLU(),
nn.Linear(self.state_dim * 2, self.state_dim)
)
# Observation model
self.observation = nn.Sequential(
nn.Linear(self.state_dim, self.obs_dim * 2),
nn.ReLU(),
nn.Linear(self.obs_dim * 2, self.obs_dim)
)
# Policy network
self.policy = nn.Sequential(
nn.Linear(self.state_dim, self.action_dim * 2),
nn.ReLU(),
nn.Linear(self.action_dim * 2, self.action_dim)
)
# Precision parameters
self.alpha = nn.Parameter(torch.ones(1)) # Precision of beliefs
self.beta = nn.Parameter(torch.ones(1)) # Precision of policies
def forward(self,
state: torch.Tensor,
action: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor]:
"""Forward pass through the generative model."""
# Hierarchical state processing
for layer in self.layers:
state = F.relu(layer(state))
# Generate observations
predicted_obs = self.observation(state)
# If action provided, predict next state
if action is not None:
state_action = torch.cat([state, action], dim=-1)
next_state = self.transition(state_action)
return predicted_obs, next_state
return predicted_obs, None
def infer_state(self,
obs: torch.Tensor,
prev_state: Optional[torch.Tensor] = None,
n_steps: int = 10) -> torch.Tensor:
"""Infer hidden state through iterative message passing."""
        if prev_state is None:
            state = torch.zeros(obs.shape[0], self.state_dim)
        else:
            # Clone so optimization does not mutate the caller's prev_state,
            # which is also the prior that state_error is measured against
            state = prev_state.clone()
        state = state.detach().requires_grad_(True)
        optimizer = torch.optim.Adam([state], lr=0.1)
for _ in range(n_steps):
optimizer.zero_grad()
# Prediction errors
pred_obs, _ = self.forward(state)
obs_error = F.mse_loss(pred_obs, obs)
if prev_state is not None:
state_error = F.mse_loss(state, prev_state)
loss = obs_error + self.alpha * state_error
else:
loss = obs_error
loss.backward()
optimizer.step()
return state.detach()
def select_action(self,
state: torch.Tensor,
temperature: float = 1.0) -> torch.Tensor:
"""Select action using active inference."""
# Get action distribution
action_logits = self.policy(state)
action_probs = F.softmax(action_logits / temperature, dim=-1)
# Sample action
action = torch.multinomial(action_probs, 1)
return action
class Nestmate:
"""
Individual ant agent implementing active inference for decision making.
"""
def __init__(self, config: dict):
"""Initialize Nestmate agent."""
self.config = config
# Physical state
self.position = np.zeros(2)
self.velocity = np.zeros(2)
self.orientation = 0.0
self.energy = config['physical']['energy']['initial']
# Task state
self.current_task = TaskType.EXPLORATION
self.carrying = None
# Sensory state
self.observations = Observation(
            pheromone=np.zeros(len(config['sensors']['pheromone']['types'])),
food=np.zeros(2),
nestmates=np.zeros(2),
obstacles=np.zeros(2),
nest=np.zeros(2)
)
# Active inference components
self.generative_model = GenerativeModel(config)
self.current_state = None
self.previous_action = None
# Memory
self.memory = {
'spatial': [],
'temporal': [],
'social': []
}
# Learning parameters
self.learning_rate = config['learning']['parameters']['learning_rate']
self.exploration_rate = config['learning']['parameters']['exploration_rate']
def update(self, observation: Observation) -> np.ndarray:
"""
Update agent state and select action using active inference.
Args:
observation: Current sensory observations
Returns:
action: Selected action as numpy array
"""
        # Convert observation to a float tensor with a batch dimension
        # (infer_state treats dim 0 as the batch axis)
        obs_tensor = torch.tensor(
            self._preprocess_observation(observation), dtype=torch.float32
        ).unsqueeze(0)
# State inference
inferred_state = self.generative_model.infer_state(
obs_tensor,
prev_state=self.current_state
)
self.current_state = inferred_state
# Action selection
action = self.generative_model.select_action(
inferred_state,
temperature=self.config['active_inference']['free_energy']['temperature']
)
# Update memory
self._update_memory(observation, action)
# Update internal state
self._update_internal_state()
return action.numpy()
def _preprocess_observation(self, observation: Observation) -> np.ndarray:
"""Preprocess raw observations into model input format."""
# Combine all observations into single vector
obs_vector = np.concatenate([
observation.pheromone,
observation.food,
observation.nestmates,
observation.obstacles,
observation.nest
])
# Normalize
obs_vector = (obs_vector - obs_vector.mean()) / (obs_vector.std() + 1e-8)
return obs_vector
def _update_memory(self, observation: Observation, action: torch.Tensor):
"""Update agent's memory systems."""
# Spatial memory
self.memory['spatial'].append({
'position': self.position.copy(),
'observation': observation,
'timestamp': None # Add actual timestamp in implementation
})
# Temporal memory
self.memory['temporal'].append({
'state': self.current_state.detach().numpy(),
'action': action.numpy(),
'reward': self._compute_reward(observation)
})
# Social memory (interactions with other agents)
if np.any(observation.nestmates):
self.memory['social'].append({
'nestmate_positions': observation.nestmates.copy(),
'interaction_type': self._classify_interaction(observation)
})
# Maintain memory size limits
for memory_type in self.memory:
if len(self.memory[memory_type]) > self.config['memory'][memory_type]['capacity']:
self.memory[memory_type].pop(0)
def _update_internal_state(self):
"""Update agent's internal state variables."""
# Update energy
self.energy -= self.config['physical']['energy']['consumption_rate']
if self.carrying is not None:
self.energy -= self.config['physical']['energy']['consumption_rate'] * 2
# Update task if needed
if self._should_switch_task():
self._switch_task()
# Update learning parameters
self.exploration_rate *= self.config['learning']['parameters']['decay_rate']
self.exploration_rate = max(
self.exploration_rate,
self.config['learning']['parameters']['min_exploration']
)
def _compute_reward(self, observation: Observation) -> float:
"""Compute reward signal from current observation."""
reward = 0.0
# Task-specific rewards
if self.current_task == TaskType.FORAGING:
reward += np.sum(observation.food) * self.config['active_inference']['preferences']['food_weight']
# Distance to nest reward
nest_distance = np.linalg.norm(observation.nest)
reward -= nest_distance * self.config['active_inference']['preferences']['home_weight']
        # Safety reward (avoiding obstacles); obstacles may arrive as a 1-D vector
        obstacle_distances = np.linalg.norm(np.atleast_2d(observation.obstacles), axis=1)
        obstacle_penalty = np.sum(1.0 / (1.0 + obstacle_distances))
        reward -= obstacle_penalty * self.config['active_inference']['preferences']['safety_weight']
# Social reward
if np.any(observation.nestmates):
social_reward = self.config['active_inference']['preferences']['social_weight']
reward += social_reward
return reward
def _should_switch_task(self) -> bool:
"""Determine if agent should switch its current task."""
# Energy-based switching
if self.energy < self.config['physical']['energy']['critical_level']:
return True
# Random switching based on flexibility
if np.random.random() < self.config['behavior']['task_switching']['flexibility']:
return True
return False
def _switch_task(self):
"""Switch to a new task based on current conditions."""
# Get valid task options
valid_tasks = list(TaskType)
if self.current_task in valid_tasks:
valid_tasks.remove(self.current_task)
# Select new task (can be made more sophisticated)
self.current_task = np.random.choice(valid_tasks)
def _classify_interaction(self, observation: Observation) -> str:
"""Classify type of interaction with nearby nestmates."""
# Simple distance-based classification
distances = np.linalg.norm(observation.nestmates, axis=1)
if np.any(distances < 1.0):
return "direct"
elif np.any(distances < 3.0):
return "indirect"
return "none"

Things/Ant_Colony/colony.py (regular file, 398 lines)

@@ -0,0 +1,398 @@
"""
Colony Management System
This module implements the colony management system that coordinates
the ant colony simulation, including agent management, task allocation,
and collective behavior.
"""
import numpy as np
from typing import Dict, List, Tuple, Optional
from dataclasses import dataclass
import networkx as nx
from agents.nestmate import Nestmate, TaskType, Observation
from environment.world import World, Position, Resource
@dataclass
class ColonyStats:
"""Container for colony statistics."""
population: int
task_distribution: Dict[TaskType, int]
resource_levels: Dict[str, float]
efficiency_metrics: Dict[str, float]
coordination_metrics: Dict[str, float]
class Colony:
"""
Colony management system coordinating multiple Nestmate agents.
"""
def __init__(self, config: dict, environment: World):
"""Initialize colony."""
self.config = config
self.environment = environment
# Initialize nest
self.nest_position = self._initialize_nest()
# Initialize agents
self.agents: List[Nestmate] = []
self._initialize_agents()
# Colony state
self.resources = {
'food': 0.0,
'water': 0.0,
'building_materials': 0.0
}
# Task management
self.task_needs = {task: 0.0 for task in TaskType}
self.task_allocation = {task: [] for task in TaskType}
# Social network
self.interaction_network = nx.Graph()
# Performance tracking
self.stats = ColonyStats(
population=len(self.agents),
task_distribution={task: 0 for task in TaskType},
resource_levels=self.resources.copy(),
efficiency_metrics={},
coordination_metrics={}
)
def _initialize_nest(self) -> Position:
"""Initialize nest position and structure."""
# Place nest in center by default
center_x = self.environment.size[0] / 2
center_y = self.environment.size[1] / 2
# Create nest chambers according to config
for chamber in self.config['nest']['structure']['chambers']:
chamber_pos = Position(
x=center_x + chamber['position'][0],
y=center_y + chamber['position'][1]
)
# Additional nest structure initialization can be added here
return Position(center_x, center_y)
def _initialize_agents(self):
"""Initialize colony agents."""
num_agents = self.config['population']['initial_size']
for _ in range(num_agents):
# Create agent with random position near nest
pos_x = self.nest_position.x + np.random.normal(0, 2.0)
pos_y = self.nest_position.y + np.random.normal(0, 2.0)
agent = Nestmate(self.config)
agent.position = np.array([pos_x, pos_y])
agent.orientation = np.random.uniform(0, 2*np.pi)
self.agents.append(agent)
self.interaction_network.add_node(agent)
# Initial task distribution
self._distribute_tasks()
def _distribute_tasks(self):
"""Distribute initial tasks among agents."""
distribution = self.config['population']['distribution']
# Calculate number of agents for each task
num_agents = len(self.agents)
task_counts = {
TaskType.FORAGING: int(distribution['foragers'] * num_agents),
TaskType.MAINTENANCE: int(distribution['maintainers'] * num_agents),
TaskType.NURSING: int(distribution['nurses'] * num_agents),
TaskType.DEFENSE: int(distribution['defenders'] * num_agents),
TaskType.EXPLORATION: int(distribution['explorers'] * num_agents)
}
# Assign tasks
agents_copy = self.agents.copy()
np.random.shuffle(agents_copy)
current_idx = 0
for task, count in task_counts.items():
for _ in range(count):
if current_idx < len(agents_copy):
agent = agents_copy[current_idx]
agent.current_task = task
self.task_allocation[task].append(agent)
current_idx += 1
def step(self, dt: float):
"""Update colony state."""
# Update task needs
self._update_task_needs()
# Update agent states and actions
self._update_agents(dt)
# Update social network
self._update_social_network()
# Update colony resources
self._update_resources()
# Update statistics
self._update_statistics()
# Check emergency conditions
self._check_emergencies()
def _update_task_needs(self):
"""Update colony's task needs based on current state."""
# Reset needs
for task in TaskType:
self.task_needs[task] = 0.0
# Calculate needs based on various factors
# Foraging need based on food levels
food_ratio = self.resources['food'] / self.config['nest']['resources']['food_capacity']
self.task_needs[TaskType.FORAGING] = 1.0 - food_ratio
# Maintenance need based on nest condition (placeholder)
self.task_needs[TaskType.MAINTENANCE] = 0.5 # Could be based on actual nest damage
# Nursing need (placeholder)
self.task_needs[TaskType.NURSING] = 0.3 # Could be based on brood size
# Defense need based on threats (placeholder)
self.task_needs[TaskType.DEFENSE] = 0.2 # Could be based on detected threats
# Exploration need
explored_area = len(set((agent.position[0], agent.position[1]) for agent in self.agents))
total_area = self.environment.size[0] * self.environment.size[1]
self.task_needs[TaskType.EXPLORATION] = 1.0 - (explored_area / total_area)
def _update_agents(self, dt: float):
"""Update all agents' states and actions."""
for agent in self.agents:
# Get environmental state at agent's position
pos = Position(agent.position[0], agent.position[1])
env_state = self.environment.get_state(pos)
# Get nearby agents
nearby_agents = self._get_nearby_agents(agent)
# Update agent observations
observations = self._prepare_observations(agent, env_state, nearby_agents)
# Get agent's action
action = agent.update(observations)
# Apply action
self._apply_action(agent, action, dt)
# Handle resource collection/deposition
self._handle_resource_interaction(agent)
def _get_nearby_agents(self, agent: Nestmate, radius: float = 5.0) -> List[Nestmate]:
"""Get list of agents within specified radius."""
nearby = []
for other in self.agents:
if other != agent:
distance = np.linalg.norm(other.position - agent.position)
if distance <= radius:
nearby.append(other)
return nearby
    def _prepare_observations(self,
                              agent: Nestmate,
                              env_state: Dict,
                              nearby_agents: List[Nestmate]) -> Observation:
        """Prepare observation data for agent."""
        # Convert nearby agents to observation format
        nearby_positions = np.array([other.position for other in nearby_agents])
        # Package as an Observation dataclass so Nestmate.update can use
        # attribute access (a plain dict would raise AttributeError there)
        return Observation(
            pheromone=np.array(list(env_state['pheromones'].values())),
            food=np.array([r.amount for r in env_state['resources'] if r.type == 'food']),
            nestmates=nearby_positions if len(nearby_positions) > 0 else np.zeros((0, 2)),
            obstacles=np.array([1.0 if env_state['terrain']['type'] == 'rock' else 0.0]),
            nest=agent.position - np.array([self.nest_position.x, self.nest_position.y])
        )
def _apply_action(self, agent: Nestmate, action: np.ndarray, dt: float):
"""Apply agent's action and update its state."""
# Extract movement components
speed = np.clip(action[0], 0, agent.config['physical']['max_speed'])
turn_rate = np.clip(action[1], -agent.config['physical']['turn_rate'],
agent.config['physical']['turn_rate'])
# Update orientation
agent.orientation += turn_rate * dt
agent.orientation = agent.orientation % (2 * np.pi)
# Update position
direction = np.array([np.cos(agent.orientation), np.sin(agent.orientation)])
new_position = agent.position + speed * direction * dt
# Check for collision and apply if valid
if self._is_valid_position(new_position):
agent.position = new_position
def _is_valid_position(self, position: np.ndarray) -> bool:
"""Check if position is valid (within bounds and not in obstacle)."""
# Check bounds
if not (0 <= position[0] < self.environment.size[0] and
0 <= position[1] < self.environment.size[1]):
return False
# Check for obstacles
pos = Position(position[0], position[1])
env_state = self.environment.get_state(pos)
if env_state['terrain']['type'] == 'rock':
return False
return True
def _handle_resource_interaction(self, agent: Nestmate):
"""Handle agent's interaction with resources."""
# Check if agent is at nest
at_nest = np.linalg.norm(agent.position - np.array([self.nest_position.x, self.nest_position.y])) < 2.0
if at_nest and agent.carrying is not None:
# Deposit resource
self.resources[agent.carrying.type] += agent.carrying.amount
agent.carrying = None
elif agent.carrying is None:
# Try to pick up resource
pos = Position(agent.position[0], agent.position[1])
nearby_resources = self.environment._get_nearby_resources(pos, radius=1.0)
if nearby_resources:
resource = nearby_resources[0]
agent.carrying = resource
self.environment.resources.remove(resource)
def _update_social_network(self):
"""Update colony's social interaction network."""
# Clear old edges
self.interaction_network.clear_edges()
# Add edges for current interactions
for agent in self.agents:
nearby = self._get_nearby_agents(agent)
for other in nearby:
self.interaction_network.add_edge(agent, other)
def _update_resources(self):
"""Update colony's resource levels."""
# Apply consumption
num_agents = len(self.agents)
self.resources['food'] -= num_agents * self.config['physical']['energy']['consumption_rate']
self.resources['water'] -= num_agents * self.config['physical']['energy']['consumption_rate'] * 0.5
# Enforce bounds
for resource_type in self.resources:
self.resources[resource_type] = max(0.0, self.resources[resource_type])
def _update_statistics(self):
"""Update colony statistics."""
# Update basic stats
self.stats.population = len(self.agents)
# Update task distribution
for task in TaskType:
self.stats.task_distribution[task] = len([a for a in self.agents if a.current_task == task])
# Update resource levels
self.stats.resource_levels = self.resources.copy()
# Calculate efficiency metrics
self.stats.efficiency_metrics = {
'resource_gathering': self._calculate_resource_efficiency(),
'task_completion': self._calculate_task_efficiency(),
'energy_efficiency': self._calculate_energy_efficiency()
}
# Calculate coordination metrics
self.stats.coordination_metrics = {
'network_density': nx.density(self.interaction_network),
'clustering_coefficient': nx.average_clustering(self.interaction_network),
'task_specialization': self._calculate_specialization()
}
def _calculate_resource_efficiency(self) -> float:
"""Calculate resource gathering efficiency."""
if self.stats.task_distribution[TaskType.FORAGING] == 0:
return 0.0
return self.resources['food'] / self.stats.task_distribution[TaskType.FORAGING]
def _calculate_task_efficiency(self) -> float:
"""Calculate overall task completion efficiency."""
total_need = sum(self.task_needs.values())
if total_need == 0:
return 1.0
        # Guard against individual needs of zero (total_need > 0 does not
        # imply every task's need is nonzero)
        need_satisfaction = sum(
            min(1.0, len(self.task_allocation[task]) / max(need * len(self.agents), 1e-8))
            for task, need in self.task_needs.items()
        )
        return need_satisfaction / len(self.task_needs)
def _calculate_energy_efficiency(self) -> float:
"""Calculate colony's energy efficiency."""
total_energy = sum(agent.energy for agent in self.agents)
return total_energy / (len(self.agents) * self.config['physical']['energy']['initial'])
def _calculate_specialization(self) -> float:
"""Calculate degree of task specialization."""
if not self.agents:
return 0.0
        # Count agents whose remembered internal state changed between the
        # last two steps (np.any avoids the ambiguous array-truth ValueError)
        total_switches = sum(
            1 for agent in self.agents
            if len(agent.memory['temporal']) > 1
            and np.any(agent.memory['temporal'][-1]['state'] != agent.memory['temporal'][-2]['state'])
        )
        return 1.0 - (total_switches / len(self.agents))
def _check_emergencies(self):
"""Check and respond to emergency conditions."""
# Check resource levels
for resource_type, amount in self.resources.items():
if amount < self.config['emergency']['resources']['critical_threshold']:
self._handle_resource_emergency(resource_type)
# Check for threats (placeholder)
if self._detect_threats():
self._handle_threat_emergency()
def _handle_resource_emergency(self, resource_type: str):
"""Handle critical resource shortage."""
if self.config['emergency']['resources']['emergency_allocation']:
# Reassign more agents to foraging
num_new_foragers = int(len(self.agents) * 0.2) # 20% of colony
current_foragers = set(self.task_allocation[TaskType.FORAGING])
for agent in self.agents:
if len(current_foragers) >= num_new_foragers:
break
if agent not in current_foragers and agent.current_task != TaskType.DEFENSE:
agent.current_task = TaskType.FORAGING
current_foragers.add(agent)
def _detect_threats(self) -> bool:
"""Detect potential threats to the colony."""
# Placeholder for threat detection
return False
def _handle_threat_emergency(self):
"""Handle detected threats."""
if self.config['emergency']['threats']['mobilization_rate'] > 0:
# Reassign agents to defense
num_defenders = int(len(self.agents) * self.config['emergency']['threats']['mobilization_rate'])
current_defenders = set(self.task_allocation[TaskType.DEFENSE])
for agent in self.agents:
if len(current_defenders) >= num_defenders:
break
if agent not in current_defenders:
agent.current_task = TaskType.DEFENSE
current_defenders.add(agent)

Things/Ant_Colony/config/agent_config.yaml (regular file, 168 lines)

@@ -0,0 +1,168 @@
# Nestmate Agent Configuration
# Agent Parameters
agent:
name: "Nestmate"
version: "1.0.0"
description: "Active inference-based ant agent"
# Physical Parameters
physical:
max_speed: 1.0
turn_rate: 0.5
sensor_range: 5.0
carry_capacity: 1.0
energy:
initial: 1.0
consumption_rate: 0.001
recharge_rate: 0.005
critical_level: 0.2
# Sensory System
sensors:
pheromone:
types: ["food", "home", "alarm", "trail"]
detection_threshold: 0.01
gradient_sensitivity: 0.8
vision:
range: 5.0
angle: 120.0
resolution: 10
touch:
range: 0.5
sensitivity: 0.9
proprioception:
accuracy: 0.95
# Active Inference Parameters
active_inference:
# Generative Model
model:
hierarchical_levels: 3
state_dimensions: [10, 20, 30]
temporal_horizon: 5
precision_initial: 1.0
# Belief Updates
belief_update:
method: "variational"
learning_rate: 0.1
momentum: 0.9
precision_update_rate: 0.05
# Free Energy
free_energy:
temperature: 1.0
exploration_weight: 0.2
temporal_discount: 0.95
# Policy Selection
policy:
num_policies: 10
evaluation_horizon: 3
selection_temperature: 0.5
# Preferences
preferences:
food_weight: 1.0
home_weight: 0.8
safety_weight: 0.6
social_weight: 0.4
# Learning Parameters
learning:
enabled: true
type: "online"
parameters:
learning_rate: 0.1
exploration_rate: 0.2
decay_rate: 0.995
min_exploration: 0.05
# Experience Replay
experience_replay:
enabled: true
buffer_size: 1000
batch_size: 32
update_frequency: 10
# Social Learning
social_learning:
enabled: true
imitation_rate: 0.3
observation_range: 5.0
# Behavior Parameters
behavior:
# Task Switching
task_switching:
threshold: 0.7
cooldown: 50
flexibility: 0.5
# Pheromone Deposition
pheromone:
deposit_rate: 0.1
deposit_amount: 1.0
threshold: 0.3
# Movement
movement:
persistence: 0.7
alignment_weight: 0.3
cohesion_weight: 0.4
separation_weight: 0.5
# Memory Parameters
memory:
spatial:
capacity: 100
decay_rate: 0.01
  temporal:
    capacity: 10            # read by Nestmate._update_memory (was window_size)
    compression_rate: 0.8
social:
capacity: 50
forget_rate: 0.05
# Communication
communication:
range: 3.0
bandwidth: 5
noise: 0.1
protocols:
- "location_sharing"
- "task_status"
- "danger_signal"
- "food_location"
# Adaptation Parameters
adaptation:
# Environmental
environmental:
learning_rate: 0.05
adaptation_threshold: 0.3
# Social
social:
conformity_bias: 0.4
innovation_rate: 0.2
# Task
task:
specialization_rate: 0.1
flexibility: 0.7
# Performance Metrics
metrics:
tracking:
- "energy_level"
- "task_success_rate"
- "exploration_efficiency"
- "social_integration"
- "learning_progress"
logging:
frequency: 100
detailed: true
save_path: "logs/agent/"

Things/Ant_Colony/config/colony_config.yaml (regular file, 199 lines)

@@ -0,0 +1,199 @@
# Colony Configuration
# Colony Parameters
colony:
name: "Active Inference Colony"
version: "1.0.0"
description: "Multi-agent ant colony system using active inference"
# Population Parameters
population:
initial_size: 100
max_size: 500
min_size: 50
growth_rate: 0.01
mortality_rate: 0.005
# Agent Distribution
distribution:
foragers: 0.4
maintainers: 0.2
nurses: 0.2
defenders: 0.1
explorers: 0.1
# Nest Parameters
nest:
# Physical Structure
structure:
size: [50, 50]
chambers:
- name: "food_storage"
size: [10, 10]
position: [5, 5]
- name: "brood_chamber"
size: [15, 15]
position: [25, 25]
- name: "waste_chamber"
size: [8, 8]
position: [40, 40]
# Environmental Controls
environment:
temperature: 25.0
humidity: 0.7
ventilation: 0.5
# Resource Management
resources:
food_capacity: 1000
water_capacity: 500
building_materials: 200
# Collective Behavior
collective:
# Decision Making
decision_making:
consensus_threshold: 0.7
quorum_size: 0.3
decision_timeout: 100
# Task Allocation
task_allocation:
dynamic: true
response_threshold: 0.6
switch_cost: 0.2
# Information Sharing
information_sharing:
network_topology: "small_world"
connection_density: 0.3
communication_range: 5.0
# Pheromone System
pheromones:
# Types
types:
food:
evaporation_rate: 0.01
diffusion_rate: 0.1
deposit_amount: 1.0
home:
evaporation_rate: 0.005
diffusion_rate: 0.05
deposit_amount: 0.8
alarm:
evaporation_rate: 0.05
diffusion_rate: 0.2
deposit_amount: 2.0
trail:
evaporation_rate: 0.02
diffusion_rate: 0.15
deposit_amount: 1.2
# Grid Parameters
grid:
resolution: 0.5
update_frequency: 5
max_value: 10.0
# Learning Parameters
learning:
# Collective Learning
collective:
enabled: true
learning_rate: 0.05
memory_size: 1000
# Social Learning
social:
imitation_weight: 0.3
innovation_weight: 0.2
conformity_pressure: 0.4
# Cultural Evolution
cultural:
mutation_rate: 0.01
selection_pressure: 0.8
transmission_fidelity: 0.9
# Adaptation Parameters
adaptation:
# Environmental
environmental:
sensitivity: 0.7
response_time: 10
adaptation_rate: 0.05
# Population
population:
task_flexibility: 0.6
specialization_rate: 0.1
diversity_maintenance: 0.4
# Performance Metrics
metrics:
# Colony Level
colony:
- "population_size"
- "resource_levels"
- "task_efficiency"
- "survival_rate"
# Collective Behavior
collective:
- "coordination_index"
- "information_flow"
- "decision_accuracy"
- "adaptation_rate"
# Resource Management
resources:
- "food_collection_rate"
- "resource_distribution"
- "waste_management"
# Recording
recording:
frequency: 100
save_path: "logs/colony/"
detailed: true
# Visualization
visualization:
enabled: true
update_frequency: 10
# Display Options
display:
show_pheromones: true
show_agents: true
show_resources: true
show_network: true
# Analysis Plots
plots:
- "population_dynamics"
- "resource_levels"
- "task_distribution"
- "pheromone_maps"
# Export Settings
export:
format: ["png", "csv"]
frequency: 1000
path: "output/visualizations/"
# Emergency Protocols
emergency:
# Threat Response
threats:
detection_threshold: 0.7
response_time: 5
mobilization_rate: 0.8
# Resource Management
resources:
critical_threshold: 0.2
emergency_allocation: true
conservation_rate: 0.5

Things/Ant_Colony/config/environment_config.yaml (regular file, 246 lines)

@@ -0,0 +1,246 @@
# Environment Configuration
# Environment Parameters
environment:
name: "Ant Colony Environment"
version: "1.0.0"
description: "Dynamic environment for ant colony simulation"
# World Parameters
world:
# Dimensions
size: [100, 100]
resolution: 0.5
wrap_around: false
# Physical Properties
physics:
timestep: 0.1
friction: 0.5
collision_detection: true
# Boundaries
boundaries:
type: "solid"
elasticity: 0.8
roughness: 0.3
# Terrain Parameters
terrain:
# Types
types:
- name: "soil"
friction: 0.6
deformability: 0.3
- name: "rock"
friction: 0.8
deformability: 0.1
- name: "sand"
friction: 0.4
deformability: 0.7
# Generation
generation:
method: "perlin_noise"
seed: 42
scale: 10.0
octaves: 4
# Features
features:
obstacles:
density: 0.1
min_size: [1, 1]
max_size: [5, 5]
gradients:
enabled: true
strength: 0.5
# Resource Parameters
resources:
# Food Sources
food:
types:
- name: "small_food"
size: 1.0
energy: 5.0
decay_rate: 0.001
- name: "medium_food"
size: 2.0
energy: 10.0
decay_rate: 0.002
- name: "large_food"
size: 3.0
energy: 20.0
decay_rate: 0.003
# Distribution
distribution:
method: "clustered"
cluster_size: 5
cluster_spread: 10.0
total_amount: 1000
# Dynamics
dynamics:
respawn_rate: 0.01
min_distance_to_nest: 10.0
max_distance_to_nest: 50.0
# Water Sources
water:
distribution:
method: "random"
total_amount: 500
dynamics:
evaporation_rate: 0.001
flow_rate: 0.1
# Environmental Conditions
conditions:
# Temperature
temperature:
base: 25.0
variation: 5.0
daily_cycle: true
cycle_period: 1000
# Humidity
humidity:
base: 0.7
variation: 0.2
daily_cycle: true
cycle_period: 1000
# Light
light:
base: 1.0
variation: 0.3
daily_cycle: true
cycle_period: 1000
# Hazards
hazards:
# Predators
predators:
enabled: true
types:
- name: "small_predator"
speed: 1.5
damage: 5.0
range: 10.0
- name: "large_predator"
speed: 1.0
damage: 10.0
range: 15.0
spawn_rate: 0.001
# Environmental
environmental:
flood_probability: 0.001
drought_probability: 0.001
disease_probability: 0.0005
# Pheromone Grid
pheromone_grid:
resolution: 0.5
layers:
- name: "food"
diffusion_rate: 0.1
evaporation_rate: 0.01
- name: "home"
diffusion_rate: 0.05
evaporation_rate: 0.005
- name: "alarm"
diffusion_rate: 0.2
evaporation_rate: 0.05
- name: "trail"
diffusion_rate: 0.15
evaporation_rate: 0.02
# Dynamics
dynamics:
update_frequency: 5
max_value: 10.0
min_value: 0.01
# Spatial Analysis
spatial:
# Grid Analysis
grid:
enabled: true
resolution: 2.0
metrics:
- "resource_density"
- "agent_density"
- "pheromone_concentration"
# Regions
regions:
automatic_detection: true
min_region_size: 10
max_regions: 10
# Time Parameters
time:
# Cycles
cycles:
day_length: 1000
season_length: 10000
year_length: 40000
# Events
events:
random_seed: 42
min_interval: 100
max_interval: 1000
# Performance Settings
performance:
# Optimization
optimization:
spatial_hashing: true
grid_size: 5.0
max_entities_per_cell: 10
# Update Frequencies
update_freq:
physics: 1
pheromones: 5
resources: 10
hazards: 20
# Limits
limits:
max_entities: 1000
max_pheromone_updates: 1000
max_collision_checks: 1000
# Visualization Settings
visualization:
# Layers
layers:
terrain: true
resources: true
pheromones: true
agents: true
hazards: true
# Colors
colors:
terrain:
soil: [139, 69, 19]
rock: [128, 128, 128]
sand: [194, 178, 128]
pheromones:
food: [0, 255, 0]
home: [255, 0, 0]
alarm: [255, 255, 0]
trail: [0, 0, 255]
# Display
display:
grid_lines: false
coordinates: true
scale_bar: true
legend: true

Things/Ant_Colony/config/simulation_config.yaml (regular file, 241 lines)

@@ -0,0 +1,241 @@
# Simulation Configuration
# Simulation Parameters
simulation:
name: "Ant Colony Simulation"
version: "1.0.0"
description: "Multi-agent ant colony simulation using active inference"
# Runtime Parameters
runtime:
# Time Settings
time:
max_steps: 100000
timestep: 0.1
real_time_factor: 1.0
# Execution
execution:
num_threads: 4
gpu_enabled: false
seed: 42
deterministic: true
# Initialization
initialization:
# World Setup
world:
random_seed: 42
generate_terrain: true
place_resources: true
# Colony Setup
colony:
random_seed: 43
place_nest: true
distribute_agents: true
# Physics Engine
physics:
# Engine Settings
engine:
type: "2D"
collision_detection: true
spatial_hash_size: 5.0
# Parameters
parameters:
gravity: [0, 0]
friction: 0.5
restitution: 0.5
# Constraints
constraints:
velocity_cap: 10.0
force_cap: 100.0
acceleration_cap: 20.0
# Integration
integration:
# Methods
method: "RK4"
substeps: 2
# Error Control
error_tolerance: 1e-6
max_iterations: 100
# Stability
stability_checks: true
energy_conservation: true
# Active Inference Parameters
active_inference:
# Global Parameters
global:
temperature: 1.0
learning_rate: 0.1
exploration_rate: 0.2
# Hierarchical Settings
hierarchical:
levels: 3
top_down_weight: 0.7
bottom_up_weight: 0.3
# Precision Settings
precision:
initial: 1.0
learning_enabled: true
adaptation_rate: 0.05
# Multi-Agent System
multi_agent:
# Coordination
coordination:
enabled: true
method: "decentralized"
communication_range: 5.0
# Synchronization
synchronization:
enabled: true
update_frequency: 10
sync_tolerance: 0.1
# Load Balancing
load_balancing:
enabled: true
method: "dynamic"
threshold: 0.8
# Analysis Settings
analysis:
# Data Collection
data_collection:
enabled: true
frequency: 100
detailed_logging: true
# Metrics
metrics:
agent_level:
- "position"
- "velocity"
- "energy"
- "beliefs"
colony_level:
- "population"
- "resources"
- "efficiency"
- "coordination"
environment_level:
- "resource_distribution"
- "pheromone_maps"
- "agent_density"
# Statistics
statistics:
compute_mean: true
compute_variance: true
compute_correlations: true
temporal_analysis: true
# Visualization
visualization:
# Real-time Display
realtime:
enabled: true
update_frequency: 10
quality: "medium"
# Recording
recording:
enabled: true
format: "mp4"
framerate: 30
resolution: [1920, 1080]
# Features
features:
show_agents: true
show_pheromones: true
show_resources: true
show_stats: true
# UI Elements
ui:
show_controls: true
show_plots: true
show_metrics: true
interactive: true
# Data Management
data:
# Storage
storage:
format: "hdf5"
compression: true
backup_frequency: 1000
# Export
export:
enabled: true
format: ["csv", "json"]
frequency: 1000
# Checkpointing
checkpointing:
enabled: true
frequency: 5000
keep_last: 5
# Analysis Output
analysis:
save_plots: true
save_metrics: true
save_trajectories: true
output_format: ["png", "pdf"]
# Performance Monitoring
performance:
# Monitoring
monitoring:
enabled: true
frequency: 100
# Profiling
profiling:
enabled: true
detailed: true
# Optimization
optimization:
auto_tune: true
target_fps: 30
# Resource Usage
resources:
max_memory: "4GB"
max_cpu_percent: 80
gpu_memory_limit: "2GB"
# Debug Settings
debug:
# Logging
logging:
level: "INFO"
file: "logs/simulation.log"
console_output: true
# Validation
validation:
check_constraints: true
verify_physics: true
test_consistency: true
# Development
development:
assertions_enabled: true
extra_checks: true
profile_code: true

Things/Ant_Colony/environment/world.py (regular file, 367 lines)

@@ -0,0 +1,367 @@
"""
World Environment Implementation
This module implements the environment for the ant colony simulation,
including terrain generation, resource management, and physics.
"""
import numpy as np
from typing import Dict, List, Tuple, Optional
from dataclasses import dataclass
import noise # For Perlin noise terrain generation
from scipy.ndimage import gaussian_filter
@dataclass
class Position:
"""2D position with optional orientation."""
x: float
y: float
theta: float = 0.0
@dataclass
class Resource:
"""Resource entity in the environment."""
position: Position
type: str
amount: float
energy: float
decay_rate: float
class TerrainGenerator:
"""Generates and manages terrain features."""
def __init__(self, config: dict):
"""Initialize terrain generator."""
self.config = config
self.size = config['world']['size']
self.resolution = config['world']['resolution']
# Initialize terrain grid
self.height_map = np.zeros(self.size)
self.friction_map = np.zeros(self.size)
        self.type_map = np.full(self.size, "soil", dtype="<U8")  # np.zeros(..., dtype=str) would truncate "sand"/"rock" to one char
# Generate initial terrain
self._generate_terrain()
def _generate_terrain(self):
"""Generate terrain using Perlin noise."""
# Generate base height map
scale = self.config['terrain']['generation']['scale']
octaves = self.config['terrain']['generation']['octaves']
seed = self.config['terrain']['generation']['seed']
for i in range(self.size[0]):
for j in range(self.size[1]):
self.height_map[i, j] = noise.pnoise2(
i/scale,
j/scale,
octaves=octaves,
persistence=0.5,
lacunarity=2.0,
base=seed
)
# Normalize height map
self.height_map = (self.height_map - self.height_map.min()) / (self.height_map.max() - self.height_map.min())
# Generate terrain types and friction based on height
for i in range(self.size[0]):
for j in range(self.size[1]):
height = self.height_map[i, j]
if height < 0.3:
self.type_map[i, j] = "sand"
self.friction_map[i, j] = self.config['terrain']['types'][2]['friction']
elif height < 0.7:
self.type_map[i, j] = "soil"
self.friction_map[i, j] = self.config['terrain']['types'][0]['friction']
else:
self.type_map[i, j] = "rock"
self.friction_map[i, j] = self.config['terrain']['types'][1]['friction']
# Add obstacles
self._add_obstacles()
def _add_obstacles(self):
"""Add obstacles to the terrain."""
density = self.config['terrain']['features']['obstacles']['density']
min_size = self.config['terrain']['features']['obstacles']['min_size']
max_size = self.config['terrain']['features']['obstacles']['max_size']
num_obstacles = int(density * self.size[0] * self.size[1])
for _ in range(num_obstacles):
# Random position and size
pos_x = np.random.randint(0, self.size[0])
pos_y = np.random.randint(0, self.size[1])
size_x = np.random.randint(min_size[0], max_size[0] + 1)
size_y = np.random.randint(min_size[1], max_size[1] + 1)
# Add obstacle
x_range = slice(max(0, pos_x), min(self.size[0], pos_x + size_x))
y_range = slice(max(0, pos_y), min(self.size[1], pos_y + size_y))
self.height_map[x_range, y_range] = 1.0
self.type_map[x_range, y_range] = "rock"
self.friction_map[x_range, y_range] = self.config['terrain']['types'][1]['friction']
class PheromoneGrid:
"""Manages pheromone diffusion and evaporation."""
def __init__(self, config: dict):
"""Initialize pheromone grid."""
self.config = config
self.size = config['world']['size']
self.resolution = config['pheromone_grid']['resolution']
# Initialize pheromone layers
self.layers = {}
for layer in config['pheromone_grid']['layers']:
self.layers[layer['name']] = {
'grid': np.zeros(self.size),
'diffusion_rate': layer['diffusion_rate'],
'evaporation_rate': layer['evaporation_rate']
}
def update(self, dt: float):
"""Update pheromone concentrations."""
for layer_name, layer in self.layers.items():
# Diffusion
layer['grid'] = gaussian_filter(
layer['grid'],
sigma=layer['diffusion_rate'] * dt
)
# Evaporation
layer['grid'] *= (1 - layer['evaporation_rate'] * dt)
# Enforce bounds
layer['grid'] = np.clip(
layer['grid'],
self.config['pheromone_grid']['dynamics']['min_value'],
self.config['pheromone_grid']['dynamics']['max_value']
)
def deposit(self, position: Position, pheromone_type: str, amount: float):
"""Deposit pheromone at specified position."""
if pheromone_type not in self.layers:
return
# Convert position to grid coordinates
x = int(position.x / self.resolution)
y = int(position.y / self.resolution)
if 0 <= x < self.size[0] and 0 <= y < self.size[1]:
self.layers[pheromone_type]['grid'][x, y] += amount
def get_concentration(self, position: Position, pheromone_type: str) -> float:
"""Get pheromone concentration at specified position."""
if pheromone_type not in self.layers:
return 0.0
x = int(position.x / self.resolution)
y = int(position.y / self.resolution)
if 0 <= x < self.size[0] and 0 <= y < self.size[1]:
return self.layers[pheromone_type]['grid'][x, y]
return 0.0
class World:
"""Main environment class managing all environmental components."""
def __init__(self, config: dict):
"""Initialize world environment."""
self.config = config
self.size = config['world']['size']
self.resolution = config['world']['resolution']
# Initialize components
self.terrain = TerrainGenerator(config)
self.pheromones = PheromoneGrid(config)
# Resource management
self.resources: List[Resource] = []
self._initialize_resources()
# Time tracking
self.time = 0.0
self.day_time = 0.0
# Environmental conditions
self.temperature = config['conditions']['temperature']['base']
self.humidity = config['conditions']['humidity']['base']
self.light = config['conditions']['light']['base']
def step(self, dt: float):
"""Update world state."""
# Update time
self.time += dt
self.day_time = (self.day_time + dt) % self.config['time']['cycles']['day_length']
# Update environmental conditions
self._update_conditions()
# Update resources
self._update_resources(dt)
# Update pheromones
self.pheromones.update(dt)
def _initialize_resources(self):
"""Initialize resource distribution."""
# Food resources
food_config = self.config['resources']['food']
if food_config['distribution']['method'] == "clustered":
self._create_resource_clusters(
food_config['distribution']['cluster_size'],
food_config['distribution']['total_amount']
)
else:
self._create_random_resources(
food_config['distribution']['total_amount']
)
# Water resources
water_config = self.config['resources']['water']
self._create_random_resources(
water_config['distribution']['total_amount'],
resource_type="water"
)
def _create_resource_clusters(self, cluster_size: int, total_amount: float):
"""Create clustered resource distribution."""
num_clusters = int(total_amount / cluster_size)
for _ in range(num_clusters):
# Random cluster center
center_x = np.random.uniform(0, self.size[0])
center_y = np.random.uniform(0, self.size[1])
# Create resources in cluster
for _ in range(cluster_size):
# Random offset from center
offset_x = np.random.normal(0, self.config['resources']['food']['distribution']['cluster_spread'])
offset_y = np.random.normal(0, self.config['resources']['food']['distribution']['cluster_spread'])
x = np.clip(center_x + offset_x, 0, self.size[0] - 1)
y = np.clip(center_y + offset_y, 0, self.size[1] - 1)
# Create resource
resource = Resource(
position=Position(x, y),
type="food",
amount=np.random.uniform(1.0, 3.0),
energy=10.0,
decay_rate=0.001
)
self.resources.append(resource)
def _create_random_resources(self, total_amount: float, resource_type: str = "food"):
"""Create randomly distributed resources."""
num_resources = int(total_amount)
for _ in range(num_resources):
x = np.random.uniform(0, self.size[0])
y = np.random.uniform(0, self.size[1])
resource = Resource(
position=Position(x, y),
type=resource_type,
amount=np.random.uniform(1.0, 3.0),
energy=5.0 if resource_type == "water" else 10.0,
decay_rate=0.001
)
self.resources.append(resource)
def _update_resources(self, dt: float):
"""Update resource states."""
# Update existing resources
for resource in self.resources[:]:
resource.amount -= resource.decay_rate * dt
if resource.amount <= 0:
self.resources.remove(resource)
# Respawn resources if needed
if len([r for r in self.resources if r.type == "food"]) < self.config['resources']['food']['distribution']['total_amount'] * 0.5:
self._create_random_resources(10)
def _update_conditions(self):
"""Update environmental conditions."""
# Day/night cycle
day_progress = self.day_time / self.config['time']['cycles']['day_length']
# Temperature variation
self.temperature = self.config['conditions']['temperature']['base'] + \
self.config['conditions']['temperature']['variation'] * \
np.sin(2 * np.pi * day_progress)
# Humidity variation
self.humidity = self.config['conditions']['humidity']['base'] + \
self.config['conditions']['humidity']['variation'] * \
np.sin(2 * np.pi * day_progress + np.pi/2)
# Light variation
self.light = self.config['conditions']['light']['base'] + \
self.config['conditions']['light']['variation'] * \
np.sin(2 * np.pi * day_progress)
def get_state(self, position: Position) -> Dict:
"""Get environmental state at specified position."""
return {
'terrain': {
'height': self._get_height(position),
'type': self._get_terrain_type(position),
'friction': self._get_friction(position)
},
'pheromones': {
name: self.pheromones.get_concentration(position, name)
for name in self.pheromones.layers.keys()
},
'resources': self._get_nearby_resources(position),
'conditions': {
'temperature': self.temperature,
'humidity': self.humidity,
'light': self.light
}
}
def _get_height(self, position: Position) -> float:
"""Get terrain height at position."""
x = int(position.x / self.resolution)
y = int(position.y / self.resolution)
if 0 <= x < self.size[0] and 0 <= y < self.size[1]:
return self.terrain.height_map[x, y]
return 0.0
def _get_terrain_type(self, position: Position) -> str:
"""Get terrain type at position."""
x = int(position.x / self.resolution)
y = int(position.y / self.resolution)
if 0 <= x < self.size[0] and 0 <= y < self.size[1]:
return self.terrain.type_map[x, y]
return "none"
def _get_friction(self, position: Position) -> float:
"""Get terrain friction at position."""
x = int(position.x / self.resolution)
y = int(position.y / self.resolution)
if 0 <= x < self.size[0] and 0 <= y < self.size[1]:
return self.terrain.friction_map[x, y]
return 1.0
def _get_nearby_resources(self, position: Position, radius: float = 5.0) -> List[Resource]:
"""Get resources within specified radius of position."""
nearby = []
for resource in self.resources:
dx = resource.position.x - position.x
dy = resource.position.y - position.y
distance = np.sqrt(dx*dx + dy*dy)
if distance <= radius:
nearby.append(resource)
return nearby
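# ---------------------------------------------------------------------------
# Minimal usage sketch (illustrative addition, not part of the original
# module): exercise PheromoneGrid on its own with a hand-written config.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    demo_config = {
        'world': {'size': [50, 50]},
        'pheromone_grid': {
            'resolution': 0.5,
            'layers': [{'name': 'food', 'diffusion_rate': 0.1,
                        'evaporation_rate': 0.01}],
            'dynamics': {'min_value': 0.0, 'max_value': 10.0},
        },
    }
    grid = PheromoneGrid(demo_config)
    grid.deposit(Position(10.0, 10.0), 'food', amount=5.0)
    for _ in range(10):
        grid.update(dt=1.0)
    print("concentration near deposit:",
          grid.get_concentration(Position(10.0, 10.0), 'food'))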

Things/Ant_Colony/requirements.txt (regular file, 15 lines)

@@ -0,0 +1,15 @@
numpy>=1.21.0
scipy>=1.7.0
matplotlib>=3.4.0
seaborn>=0.11.0
torch>=1.9.0
networkx>=2.6.0
pyyaml>=5.4.0
h5py>=3.3.0
pandas>=1.3.0
noise>=1.2.2 # For Perlin noise generation
tqdm>=4.61.0 # For progress bars
pytest>=6.2.0 # For testing
black>=21.6b0 # For code formatting
mypy>=0.910 # For type checking
pylint>=2.9.0 # For code analysis

Things/Ant_Colony/simulation.py (regular file, 394 lines)

@@ -0,0 +1,394 @@
"""
Simulation Runner
This module implements the simulation runner that manages the ant colony simulation,
including visualization, data collection, and analysis.
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import seaborn as sns
from typing import Dict, List, Tuple, Optional
import yaml
import h5py
import time
import logging
from pathlib import Path
import pandas as pd
from environment.world import World
from colony import Colony
from agents.nestmate import TaskType
class Simulation:
"""
Manages the ant colony simulation, including visualization and data collection.
"""
def __init__(self, config_path: str):
"""Initialize simulation."""
# Load configuration
self.config = self._load_config(config_path)
# Set up logging
self._setup_logging()
# Initialize components
self.environment = World(self.config)
self.colony = Colony(self.config, self.environment)
# Visualization setup
if self.config['visualization']['realtime']['enabled']:
self._setup_visualization()
# Data collection setup
self.data = {
'time': [],
'population': [],
'resources': [],
'task_distribution': [],
'efficiency_metrics': [],
'coordination_metrics': []
}
# Performance tracking
self.performance_metrics = {
'step_times': [],
'memory_usage': [],
'fps': []
}
# Simulation state
self.current_step = 0
self.running = False
self.paused = False
def _load_config(self, config_path: str) -> dict:
"""Load configuration from YAML file."""
with open(config_path, 'r') as f:
config = yaml.safe_load(f)
return config
def _setup_logging(self):
"""Set up logging configuration."""
        logging.basicConfig(
            level=self.config['debug']['logging']['level'],
            # The committed config has no 'format' key, so fall back to a default
            format=self.config['debug']['logging'].get(
                'format', '%(asctime)s [%(levelname)s] %(name)s: %(message)s'),
            filename=self.config['debug']['logging']['file']
        )
self.logger = logging.getLogger(__name__)
def _setup_visualization(self):
"""Set up visualization components."""
plt.style.use('dark_background')
# Create figure and subplots
        # Fall back to a default size; the committed config has no 'plots' block
        self.fig = plt.figure(
            figsize=self.config['visualization'].get('plots', {}).get('figure_size', (12, 8)))
# Main simulation view
self.ax_main = self.fig.add_subplot(221)
self.ax_main.set_title('Colony Simulation')
# Resource levels
self.ax_resources = self.fig.add_subplot(222)
self.ax_resources.set_title('Resource Levels')
# Task distribution
self.ax_tasks = self.fig.add_subplot(223)
self.ax_tasks.set_title('Task Distribution')
# Efficiency metrics
self.ax_metrics = self.fig.add_subplot(224)
self.ax_metrics.set_title('Performance Metrics')
plt.tight_layout()
def run(self, num_steps: Optional[int] = None):
"""Run simulation for specified number of steps."""
self.running = True
start_time = time.time()
max_steps = num_steps if num_steps is not None else self.config['runtime']['time']['max_steps']
try:
while self.running and self.current_step < max_steps:
if not self.paused:
self.step()
# Update visualization
if self.config['visualization']['realtime']['enabled'] and \
self.current_step % self.config['visualization']['realtime']['update_frequency'] == 0:
self._update_visualization()
# Save data
if self.current_step % self.config['data']['export']['frequency'] == 0:
self._save_data()
# Performance monitoring
if self.config['performance']['monitoring']['enabled'] and \
self.current_step % self.config['performance']['monitoring']['frequency'] == 0:
self._monitor_performance()
except KeyboardInterrupt:
self.logger.info("Simulation interrupted by user")
finally:
self._cleanup()
end_time = time.time()
self.logger.info(f"Simulation completed in {end_time - start_time:.2f} seconds")
def step(self):
"""Execute one simulation step."""
step_start = time.time()
# Update environment
self.environment.step(self.config['runtime']['time']['timestep'])
# Update colony
self.colony.step(self.config['runtime']['time']['timestep'])
# Collect data
self._collect_data()
# Update step counter
self.current_step += 1
# Log step time
step_time = time.time() - step_start
self.performance_metrics['step_times'].append(step_time)
def _collect_data(self):
"""Collect simulation data."""
# Basic metrics
self.data['time'].append(self.current_step * self.config['runtime']['time']['timestep'])
self.data['population'].append(self.colony.stats.population)
# Resources
self.data['resources'].append(self.colony.stats.resource_levels.copy())
# Task distribution
self.data['task_distribution'].append(self.colony.stats.task_distribution.copy())
# Efficiency metrics
self.data['efficiency_metrics'].append(self.colony.stats.efficiency_metrics.copy())
# Coordination metrics
self.data['coordination_metrics'].append(self.colony.stats.coordination_metrics.copy())
def _update_visualization(self):
"""Update visualization plots."""
# Clear all axes
for ax in [self.ax_main, self.ax_resources, self.ax_tasks, self.ax_metrics]:
ax.clear()
# Main simulation view
self._plot_simulation_state()
# Resource levels
self._plot_resource_levels()
# Task distribution
self._plot_task_distribution()
# Efficiency metrics
self._plot_efficiency_metrics()
plt.draw()
plt.pause(0.01)
def _plot_simulation_state(self):
"""Plot current simulation state."""
# Plot terrain
terrain = self.environment.terrain.height_map
self.ax_main.imshow(terrain, cmap='terrain')
# Plot agents
agent_positions = np.array([agent.position for agent in self.colony.agents])
agent_colors = [self._get_task_color(agent.current_task) for agent in self.colony.agents]
if len(agent_positions) > 0:
self.ax_main.scatter(agent_positions[:, 0], agent_positions[:, 1],
c=agent_colors, alpha=0.6)
# Plot resources
resource_positions = np.array([[r.position.x, r.position.y] for r in self.environment.resources])
if len(resource_positions) > 0:
self.ax_main.scatter(resource_positions[:, 0], resource_positions[:, 1],
c='yellow', marker='*')
# Plot nest
self.ax_main.scatter([self.colony.nest_position.x], [self.colony.nest_position.y],
c='white', marker='s', s=100)
self.ax_main.set_title(f'Step {self.current_step}')
def _plot_resource_levels(self):
"""Plot resource level history."""
if len(self.data['time']) > 0:
for resource_type in self.colony.resources.keys():
values = [d[resource_type] for d in self.data['resources']]
self.ax_resources.plot(self.data['time'], values, label=resource_type)
self.ax_resources.legend()
self.ax_resources.set_xlabel('Time')
self.ax_resources.set_ylabel('Amount')
def _plot_task_distribution(self):
"""Plot task distribution."""
if len(self.data['task_distribution']) > 0:
latest_dist = self.data['task_distribution'][-1]
tasks = list(latest_dist.keys())
counts = [latest_dist[task] for task in tasks]
colors = [self._get_task_color(task) for task in tasks]
self.ax_tasks.bar(range(len(tasks)), counts, color=colors)
self.ax_tasks.set_xticks(range(len(tasks)))
self.ax_tasks.set_xticklabels([task.value for task in tasks], rotation=45)
def _plot_efficiency_metrics(self):
"""Plot efficiency metrics."""
if len(self.data['efficiency_metrics']) > 0:
latest_metrics = self.data['efficiency_metrics'][-1]
metrics = list(latest_metrics.keys())
values = [latest_metrics[metric] for metric in metrics]
self.ax_metrics.bar(range(len(metrics)), values)
self.ax_metrics.set_xticks(range(len(metrics)))
self.ax_metrics.set_xticklabels(metrics, rotation=45)
self.ax_metrics.set_ylim(0, 1)
def _get_task_color(self, task: TaskType) -> str:
"""Get color for visualization based on task type."""
color_map = {
TaskType.FORAGING: 'green',
TaskType.MAINTENANCE: 'blue',
TaskType.NURSING: 'purple',
TaskType.DEFENSE: 'red',
TaskType.EXPLORATION: 'orange'
}
return color_map.get(task, 'gray')
def _save_data(self):
"""Save simulation data to file."""
if not self.config['data']['export']['enabled']:
return
# Create output directory if it doesn't exist
output_dir = Path(self.config['data']['storage']['path'])
output_dir.mkdir(parents=True, exist_ok=True)
# Save to HDF5
if 'hdf5' in self.config['data']['storage']['format']:
self._save_to_hdf5(output_dir / f'simulation_data_{self.current_step}.h5')
# Save to CSV
if 'csv' in self.config['data']['export']['format']:
self._save_to_csv(output_dir)
def _save_to_hdf5(self, filepath: Path):
"""Save data to HDF5 format."""
with h5py.File(filepath, 'w') as f:
# Create groups
sim_group = f.create_group('simulation')
colony_group = f.create_group('colony')
perf_group = f.create_group('performance')
# Save simulation data
sim_group.create_dataset('time', data=np.array(self.data['time']))
sim_group.create_dataset('step', data=self.current_step)
# Save colony data
colony_group.create_dataset('population', data=np.array(self.data['population']))
# Create datasets for dictionary data
for key in ['resources', 'task_distribution', 'efficiency_metrics', 'coordination_metrics']:
if len(self.data[key]) > 0:
group = colony_group.create_group(key)
for metric_key in self.data[key][0].keys():
values = [d[metric_key] for d in self.data[key]]
group.create_dataset(metric_key, data=np.array(values))
# Save performance data
perf_group.create_dataset('step_times', data=np.array(self.performance_metrics['step_times']))
def _save_to_csv(self, output_dir: Path):
"""Save data to CSV format."""
# Save basic metrics
pd.DataFrame({
'time': self.data['time'],
'population': self.data['population']
}).to_csv(output_dir / 'basic_metrics.csv', index=False)
# Save resource data
if len(self.data['resources']) > 0:
pd.DataFrame(self.data['resources']).to_csv(
output_dir / 'resources.csv', index=False
)
# Save task distribution
if len(self.data['task_distribution']) > 0:
pd.DataFrame(self.data['task_distribution']).to_csv(
output_dir / 'task_distribution.csv', index=False
)
# Save efficiency metrics
if len(self.data['efficiency_metrics']) > 0:
pd.DataFrame(self.data['efficiency_metrics']).to_csv(
output_dir / 'efficiency_metrics.csv', index=False
)
def _monitor_performance(self):
"""Monitor and log performance metrics."""
if len(self.performance_metrics['step_times']) > 0:
avg_step_time = np.mean(self.performance_metrics['step_times'][-100:])
current_fps = 1.0 / avg_step_time if avg_step_time > 0 else float('inf')  # guard against zero-duration steps
self.performance_metrics['fps'].append(current_fps)
self.logger.info(f"Current FPS: {current_fps:.2f}")
# Check performance against targets
if current_fps < self.config['performance']['optimization']['target_fps']:
self._optimize_performance()
def _optimize_performance(self):
"""Attempt to optimize simulation performance."""
if not self.config['performance']['optimization']['auto_tune']:
return
# Example optimization: Reduce visualization frequency
if self.config['visualization']['realtime']['enabled']:
current_freq = self.config['visualization']['realtime']['update_frequency']
self.config['visualization']['realtime']['update_frequency'] = min(current_freq * 2, 100)
self.logger.info("Adjusted visualization frequency for performance optimization")
def _cleanup(self):
"""Cleanup resources and save final state."""
# Save final data
self._save_data()
# Close visualization
if self.config['visualization']['realtime']['enabled']:
plt.close(self.fig)
# Log final statistics
self.logger.info(f"Simulation completed after {self.current_step} steps")
self.logger.info(f"Final population: {self.colony.stats.population}")
self.logger.info(f"Final resource levels: {self.colony.stats.resource_levels}")
def pause(self):
"""Pause the simulation."""
self.paused = True
self.logger.info("Simulation paused")
def resume(self):
"""Resume the simulation."""
self.paused = False
self.logger.info("Simulation resumed")
def stop(self):
"""Stop the simulation."""
self.running = False
self.logger.info("Simulation stopped")

Binary data: Things/Generic_POMDP/.hypothesis/unicode_data/14.0.0/charmap.json.gz (new regular file; binary file not shown)

View file (deleted file, 396 lines removed; from its contents, the test log Output/logs/test_run_20250206_144215.log)

@@ -1,396 +0,0 @@
2025-02-06 14:42:15,529 - test_generic_pomdp - INFO - ================================================================================
2025-02-06 14:42:15,529 - test_generic_pomdp - INFO - Starting test session
2025-02-06 14:42:15,530 - test_generic_pomdp - INFO - Log file: /home/trim/Documents/Obsidian/Cognitive_Modeling/Things/Generic_POMDP/Output/logs/test_run_20250206_144215.log
2025-02-06 14:42:15,530 - test_generic_pomdp - INFO - Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0]
2025-02-06 14:42:15,530 - test_generic_pomdp - INFO - ================================================================================
2025-02-06 14:42:15,538 - test_generic_pomdp - INFO - Testing configuration dimensions:
2025-02-06 14:42:15,539 - test_generic_pomdp - INFO - Observations: 4
2025-02-06 14:42:15,539 - test_generic_pomdp - INFO - States: 3
2025-02-06 14:42:15,539 - test_generic_pomdp - INFO - Actions: 2
2025-02-06 14:42:15,539 - test_generic_pomdp - INFO - Planning horizon: 4
2025-02-06 14:42:15,539 - test_generic_pomdp - INFO - Total timesteps: 10
2025-02-06 14:42:15,539 - test_generic_pomdp - INFO - All configuration dimensions validated successfully
2025-02-06 14:42:15,549 - test_generic_pomdp - INFO - Checking model dimensions match configuration:
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO - ✓ Observations dimension: 4
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO - ✓ States dimension: 3
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO - ✓ Actions dimension: 2
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO -
Validating matrix shapes:
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO - ✓ A matrix shape: (4, 3)
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO - ✓ B matrix shape: (3, 3, 2)
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO - ✓ C matrix shape: (4, 4)
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO - ✓ D matrix shape: (3,)
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO - ✓ E matrix shape: (2,)
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO -
✓ Initial state validated successfully
2025-02-06 14:42:15,550 - test_generic_pomdp - INFO -
All initialization checks passed successfully
2025-02-06 14:42:16,686 - test_generic_pomdp - INFO - Observation counts: [50]
2025-02-06 14:42:16,698 - test_generic_pomdp - INFO -
Starting full simulation test
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO -
Output directories:
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO - Test results: /home/trim/Documents/Obsidian/Cognitive_Modeling/Things/Generic_POMDP/Output/test_results/full_simulation
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO - Plots: /home/trim/Documents/Obsidian/Cognitive_Modeling/Things/Generic_POMDP/Output/plots
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO - EFE Components: /home/trim/Documents/Obsidian/Cognitive_Modeling/Things/Generic_POMDP/Output/plots/efe_components
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO -
Model Configuration:
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO - Observations: 4
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO - States: 3
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO - Actions: 2
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO - Planning horizon: 4
2025-02-06 14:42:16,699 - test_generic_pomdp - INFO - Total timesteps: 10
2025-02-06 14:42:20,632 - test_generic_pomdp - INFO -
Running simulation for 10 steps
2025-02-06 14:42:20,633 - test_generic_pomdp - INFO -
Step 1/10
2025-02-06 14:42:20,640 - test_generic_pomdp - INFO - Observation: 3
2025-02-06 14:42:20,640 - test_generic_pomdp - INFO - Free Energy: 0.1293
2025-02-06 14:42:20,640 - test_generic_pomdp - INFO - Updated beliefs: [8.46940190e-01 9.99325545e-17 1.53059810e-01]
2025-02-06 14:42:22,643 - test_generic_pomdp - INFO -
Step 2/10
2025-02-06 14:42:22,649 - test_generic_pomdp - INFO - Observation: 1
2025-02-06 14:42:22,649 - test_generic_pomdp - INFO - Free Energy: 0.0149
2025-02-06 14:42:22,650 - test_generic_pomdp - INFO - Updated beliefs: [9.90534545e-17 9.90534545e-17 1.00000000e+00]
2025-02-06 14:42:24,690 - test_generic_pomdp - INFO -
Step 3/10
2025-02-06 14:42:24,696 - test_generic_pomdp - INFO - Observation: 1
2025-02-06 14:42:24,696 - test_generic_pomdp - INFO - Free Energy: 0.0149
2025-02-06 14:42:24,696 - test_generic_pomdp - INFO - Updated beliefs: [9.97670637e-17 9.97670637e-17 1.00000000e+00]
2025-02-06 14:42:26,776 - test_generic_pomdp - INFO -
Step 4/10
2025-02-06 14:42:26,782 - test_generic_pomdp - INFO - Observation: 2
2025-02-06 14:42:26,782 - test_generic_pomdp - INFO - Free Energy: 1.1020
2025-02-06 14:42:26,782 - test_generic_pomdp - INFO - Updated beliefs: [9.06753435e-17 1.00000000e+00 9.06753435e-17]
2025-02-06 14:42:28,718 - test_generic_pomdp - INFO -
Step 5/10
2025-02-06 14:42:28,725 - test_generic_pomdp - INFO - Observation: 3
2025-02-06 14:42:28,725 - test_generic_pomdp - INFO - Free Energy: 1.0299
2025-02-06 14:42:28,725 - test_generic_pomdp - INFO - Updated beliefs: [0.37367767 0.51630096 0.11002137]
2025-02-06 14:42:30,777 - test_generic_pomdp - INFO -
Step 6/10
2025-02-06 14:42:30,783 - test_generic_pomdp - INFO - Observation: 3
2025-02-06 14:42:30,783 - test_generic_pomdp - INFO - Free Energy: 0.0919
2025-02-06 14:42:30,784 - test_generic_pomdp - INFO - Updated beliefs: [9.16941211e-01 9.98243352e-17 8.30587886e-02]
2025-02-06 14:42:32,848 - test_generic_pomdp - INFO -
Step 7/10
2025-02-06 14:42:32,855 - test_generic_pomdp - INFO - Observation: 0
2025-02-06 14:42:32,855 - test_generic_pomdp - INFO - Free Energy: -0.0126
2025-02-06 14:42:32,855 - test_generic_pomdp - INFO - Updated beliefs: [1.00000000e+00 9.95591513e-17 9.95591513e-17]
2025-02-06 14:42:34,904 - test_generic_pomdp - INFO -
Step 8/10
2025-02-06 14:42:34,911 - test_generic_pomdp - INFO - Observation: 1
2025-02-06 14:42:34,911 - test_generic_pomdp - INFO - Free Energy: 0.1970
2025-02-06 14:42:34,911 - test_generic_pomdp - INFO - Updated beliefs: [9.89869277e-17 9.89869277e-17 1.00000000e+00]
2025-02-06 14:42:36,816 - test_generic_pomdp - INFO -
Step 9/10
2025-02-06 14:42:36,822 - test_generic_pomdp - INFO - Observation: 1
2025-02-06 14:42:36,823 - test_generic_pomdp - INFO - Free Energy: 0.0149
2025-02-06 14:42:36,823 - test_generic_pomdp - INFO - Updated beliefs: [9.97670637e-17 9.97670637e-17 1.00000000e+00]
2025-02-06 14:42:38,803 - test_generic_pomdp - INFO -
Step 10/10
2025-02-06 14:42:38,810 - test_generic_pomdp - INFO - Observation: 0
2025-02-06 14:42:38,810 - test_generic_pomdp - INFO - Free Energy: -0.0126
2025-02-06 14:42:38,810 - test_generic_pomdp - INFO - Updated beliefs: [1.00000000e+00 9.86071101e-17 9.86071101e-17]
2025-02-06 14:42:41,872 - test_generic_pomdp - INFO -
Simulation Summary:
2025-02-06 14:42:41,872 - test_generic_pomdp - INFO - Average Free Energy: 0.2570
2025-02-06 14:42:41,873 - test_generic_pomdp - INFO - Observation distribution: [2 4 1 3]
2025-02-06 14:42:41,873 - test_generic_pomdp - INFO - Action distribution: [6 4]
2025-02-06 14:42:41,873 - test_generic_pomdp - INFO - Initial belief entropy: 1.0986
2025-02-06 14:42:41,873 - test_generic_pomdp - INFO - Final belief entropy: -0.0000
2025-02-06 14:42:41,873 - test_generic_pomdp - INFO -
Generated visualization files:
2025-02-06 14:42:41,874 - test_generic_pomdp - INFO - - A_matrix.png
2025-02-06 14:42:41,874 - test_generic_pomdp - INFO - - D_matrix.png
2025-02-06 14:42:41,874 - test_generic_pomdp - INFO - - matrix_overview.png
2025-02-06 14:42:41,874 - test_generic_pomdp - INFO - - belief_evolution.png
2025-02-06 14:42:41,874 - test_generic_pomdp - INFO - - E_matrix.png
2025-02-06 14:42:41,874 - test_generic_pomdp - INFO - - C_matrix.png
2025-02-06 14:42:41,874 - test_generic_pomdp - INFO - - action_distribution.png
2025-02-06 14:42:41,874 - test_generic_pomdp - INFO - - free_energy.png
2025-02-06 14:42:41,875 - test_generic_pomdp - INFO - - B_matrix.png
2025-02-06 14:42:41,875 - test_generic_pomdp - INFO -
Full simulation test completed successfully
2025-02-06 14:42:41,890 - test_generic_pomdp - INFO -
Starting full configured simulation test
2025-02-06 14:42:41,890 - test_generic_pomdp - INFO -
Running simulation for 10 timesteps as configured
2025-02-06 14:42:41,890 - test_generic_pomdp - INFO -
Model Configuration:
2025-02-06 14:42:41,890 - test_generic_pomdp - INFO - Observations: 4
2025-02-06 14:42:41,890 - test_generic_pomdp - INFO - States: 3
2025-02-06 14:42:41,890 - test_generic_pomdp - INFO - Actions: 2
2025-02-06 14:42:41,890 - test_generic_pomdp - INFO - Planning horizon: 4
2025-02-06 14:42:41,891 - test_generic_pomdp - INFO - Total timesteps: 10
2025-02-06 14:42:41,891 - test_generic_pomdp - INFO -
Timestep 1/10
2025-02-06 14:42:41,897 - test_generic_pomdp - INFO - Observation: 0
2025-02-06 14:42:41,897 - test_generic_pomdp - INFO - Free Energy: 0.0131
2025-02-06 14:42:41,897 - test_generic_pomdp - INFO - Updated beliefs: [9.94796229e-17 9.94796229e-17 1.00000000e+00]
2025-02-06 14:42:41,898 - test_generic_pomdp - INFO -
Timestep 2/10
2025-02-06 14:42:41,904 - test_generic_pomdp - INFO - Observation: 3
2025-02-06 14:42:41,905 - test_generic_pomdp - INFO - Free Energy: -0.4812
2025-02-06 14:42:41,905 - test_generic_pomdp - INFO - Updated beliefs: [0.01619846 0.0085596 0.97524194]
2025-02-06 14:42:41,905 - test_generic_pomdp - INFO -
Timestep 3/10
2025-02-06 14:42:41,914 - test_generic_pomdp - INFO - Observation: 2
2025-02-06 14:42:41,914 - test_generic_pomdp - INFO - Free Energy: 0.6181
2025-02-06 14:42:41,915 - test_generic_pomdp - INFO - Updated beliefs: [2.46025173e-01 7.53974827e-01 9.94670652e-17]
2025-02-06 14:42:41,915 - test_generic_pomdp - INFO -
Timestep 4/10
2025-02-06 14:42:41,922 - test_generic_pomdp - INFO - Observation: 0
2025-02-06 14:42:41,923 - test_generic_pomdp - INFO - Free Energy: -0.5735
2025-02-06 14:42:41,923 - test_generic_pomdp - INFO - Updated beliefs: [9.94197622e-17 9.94197622e-17 1.00000000e+00]
2025-02-06 14:42:41,923 - test_generic_pomdp - INFO -
Timestep 5/10
2025-02-06 14:42:41,929 - test_generic_pomdp - INFO - Observation: 3
2025-02-06 14:42:41,929 - test_generic_pomdp - INFO - Free Energy: 0.4059
2025-02-06 14:42:41,929 - test_generic_pomdp - INFO - Updated beliefs: [9.67484088e-17 1.00000000e+00 9.67484088e-17]
2025-02-06 14:42:41,930 - test_generic_pomdp - INFO -
Timestep 6/10
2025-02-06 14:42:41,937 - test_generic_pomdp - INFO - Observation: 2
2025-02-06 14:42:41,937 - test_generic_pomdp - INFO - Free Energy: 0.4575
2025-02-06 14:42:41,937 - test_generic_pomdp - INFO - Updated beliefs: [2.57305264e-04 9.99742695e-01 9.99798744e-17]
2025-02-06 14:42:41,938 - test_generic_pomdp - INFO -
Timestep 7/10
2025-02-06 14:42:41,946 - test_generic_pomdp - INFO - Observation: 2
2025-02-06 14:42:41,946 - test_generic_pomdp - INFO - Free Energy: 0.4059
2025-02-06 14:42:41,946 - test_generic_pomdp - INFO - Updated beliefs: [9.99516436e-17 1.00000000e+00 9.99516436e-17]
2025-02-06 14:42:41,947 - test_generic_pomdp - INFO -
Timestep 8/10
2025-02-06 14:42:41,953 - test_generic_pomdp - INFO - Observation: 2
2025-02-06 14:42:41,953 - test_generic_pomdp - INFO - Free Energy: 0.4575
2025-02-06 14:42:41,953 - test_generic_pomdp - INFO - Updated beliefs: [2.57305264e-04 9.99742695e-01 9.99798744e-17]
2025-02-06 14:42:41,953 - test_generic_pomdp - INFO -
Timestep 9/10
2025-02-06 14:42:41,959 - test_generic_pomdp - INFO - Observation: 2
2025-02-06 14:42:41,959 - test_generic_pomdp - INFO - Free Energy: 0.4059
2025-02-06 14:42:41,960 - test_generic_pomdp - INFO - Updated beliefs: [9.99516436e-17 1.00000000e+00 9.99516436e-17]
2025-02-06 14:42:41,960 - test_generic_pomdp - INFO -
Timestep 10/10
2025-02-06 14:42:41,966 - test_generic_pomdp - INFO - Observation: 0
2025-02-06 14:42:41,966 - test_generic_pomdp - INFO - Free Energy: -0.5496
2025-02-06 14:42:41,966 - test_generic_pomdp - INFO - Updated beliefs: [7.64756147e-03 9.98005899e-17 9.92352439e-01]
2025-02-06 14:42:41,966 - test_generic_pomdp - INFO -
Simulation Summary:
2025-02-06 14:42:41,966 - test_generic_pomdp - INFO - Average Free Energy: 0.1160
2025-02-06 14:42:41,966 - test_generic_pomdp - INFO - Observation distribution: [3 0 5 2]
2025-02-06 14:42:41,966 - test_generic_pomdp - INFO - Most frequent observation: 2 (count: 5)
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO - Action distribution: [4 6]
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO - Most frequent action: 1 (count: 6)
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO - Initial belief entropy: 1.0986
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO - Final belief entropy: 0.0449
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO - Belief entropy reduction: 1.0537
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO -
EFE Component Analysis:
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO - Average Ambiguity: 0.8788
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO - Average Risk: 0.6426
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO - Average Expected Preferences: 0.2222
2025-02-06 14:42:41,967 - test_generic_pomdp - INFO -
Full configured simulation completed successfully
2025-02-06 14:42:42,145 - test_generic_pomdp - INFO - Observation counts: [42 1 4 3]
2025-02-06 14:42:42,146 - test_generic_pomdp - INFO - Observation 0 count: 42
2025-02-06 14:42:42,146 - test_generic_pomdp - INFO - Mean of other observations: 2.67
2025-02-06 14:42:42,186 - test_generic_pomdp - INFO - ================================================================================
2025-02-06 14:42:42,186 - test_generic_pomdp - INFO - Test session completed
2025-02-06 14:42:42,186 - test_generic_pomdp - INFO - ================================================================================
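The entropy figures in this log can be checked by hand: a uniform belief over three states has Shannon entropy ln(3) ≈ 1.0986, matching the reported initial value, and the final beliefs of the configured run give ≈ 0.0449 (the first run's "-0.0000" is a floating-point negative zero from a near-delta belief). A short editorial sketch, not part of the test suite:

```python
# Sanity check for the belief-entropy values reported in the log above.
import numpy as np

def belief_entropy(q: np.ndarray) -> float:
    """Shannon entropy H(q) = -sum_i q_i * ln(q_i), skipping zero entries."""
    q = q[q > 0]
    return float(-np.sum(q * np.log(q)))

print(belief_entropy(np.ones(3) / 3))  # ln(3) ~= 1.0986, the initial uniform belief
final = np.array([7.64756147e-03, 9.98005899e-17, 9.92352439e-01])  # from the log
print(belief_entropy(final))           # ~= 0.0449, the final configured-run belief
# Reduction: 1.0986 - 0.0449 ~= 1.0537, matching the logged value.
```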

Двоичные данные
Things/Generic_POMDP/Output/plots/A_matrix.png

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 119 KiB

После

Ширина:  |  Высота:  |  Размер: 128 KiB

Двоичные данные
Things/Generic_POMDP/Output/plots/B_matrix.png

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 151 KiB

После

Ширина:  |  Высота:  |  Размер: 226 KiB

Двоичные данные
Things/Generic_POMDP/Output/plots/C_matrix.png

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 112 KiB

После

Ширина:  |  Высота:  |  Размер: 100 KiB

Двоичные данные
Things/Generic_POMDP/Output/plots/D_matrix.png

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 66 KiB

После

Ширина:  |  Высота:  |  Размер: 60 KiB

Двоичные данные
Things/Generic_POMDP/Output/plots/E_matrix.png

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 54 KiB

После

Ширина:  |  Высота:  |  Размер: 63 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 59 KiB

После

Ширина:  |  Высота:  |  Размер: 66 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 272 KiB

После

Ширина:  |  Высота:  |  Размер: 208 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 216 KiB

После

Ширина:  |  Высота:  |  Размер: 240 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 206 KiB

После

Ширина:  |  Высота:  |  Размер: 232 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 227 KiB

После

Ширина:  |  Высота:  |  Размер: 226 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 227 KiB

После

Ширина:  |  Высота:  |  Размер: 223 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 206 KiB

После

Ширина:  |  Высота:  |  Размер: 234 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 226 KiB

После

Ширина:  |  Высота:  |  Размер: 226 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 212 KiB

После

Ширина:  |  Высота:  |  Размер: 223 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 217 KiB

После

Ширина:  |  Высота:  |  Размер: 224 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 227 KiB

После

Ширина:  |  Высота:  |  Размер: 233 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 227 KiB

После

Ширина:  |  Высота:  |  Размер: 238 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 91 KiB

После

Ширина:  |  Высота:  |  Размер: 91 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 91 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 90 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 90 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 94 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 90 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 91 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 91 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 90 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 90 KiB

После

Ширина:  |  Высота:  |  Размер: 90 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 121 KiB

После

Ширина:  |  Высота:  |  Размер: 127 KiB

Двоичный файл не отображается.

До

Ширина:  |  Высота:  |  Размер: 337 KiB

После

Ширина:  |  Высота:  |  Размер: 324 KiB

View file

@@ -4,15 +4,20 @@
model:
name: "Generic POMDP"
description: "Generic POMDP implementation using Active Inference"
version: "1.0.0"
version: "1.0.3"
temporal_preferences: true # Enable temporal preference handling
learning_mode: "continuous" # Enable continuous learning
# Space Dimensions
dimensions:
observations: 4
states: 3
actions: 2
observations: 5
states: 4
actions: 3
total_timesteps: 10 # Total number of timesteps to run simulation
planning_horizon: 4 # Number of timesteps to look ahead for policy evaluation
memory_length: 5 # Number of past observations to consider for belief updates
temporal_discount: 0.95 # Discount factor for future preferences
belief_momentum: 0.8 # Added belief momentum for smoother updates
# Matrix Specifications
matrices:
@@ -21,54 +26,179 @@ matrices:
constraints:
- "column_stochastic"
- "non_negative"
- "sparse" # Added sparsity constraint for more distinct state-observation mappings
sparsity_params:
target_sparsity: 0.7
min_value: 0.05
validation:
max_condition_number: 1e6
min_eigenvalue: 1e-6
learning:
enabled: true
rate: 0.1
momentum: 0.9
B_matrix: # Transition model P(s'|s,a)
initialization: "identity_based"
initialization_params:
strength: 0.8 # Strength of identity component
noise: 0.1 # Added noise parameter for exploration
temporal_coherence: 0.9 # Higher values make state transitions more temporally coherent
constraints:
- "column_stochastic"
- "non_negative"
- "temporal_consistency" # Added constraint for temporal consistency
validation:
max_condition_number: 1e6
min_eigenvalue: 1e-6
learning:
enabled: true
rate: 0.05
momentum: 0.95
C_matrix: # Preferences over observations
initialization: "uniform"
initialization: "temporal_goal_directed" # Changed to temporal goal-directed initialization
initialization_params:
default_value: 0.0
goal_states: [0, 3] # Specify preferred goal states
goal_value: 1.0
avoid_value: -1.0
temporal_profile: "increasing" # Preferences increase over time
temporal_scale: 1.5 # Scale factor for temporal preference increase
constraints:
- "finite_values"
- "bounded"
- "temporal_coherence" # Added temporal coherence constraint
bounds:
min: -2.0
max: 2.0
temporal_params:
horizon_weight: 1.2 # Weight future preferences more heavily
smoothing: 0.1 # Smooth preference transitions
learning:
enabled: true
rate: 0.2
momentum: 0.8
D_matrix: # Prior beliefs over states
initialization: "uniform"
initialization: "informed" # Changed to informed initialization
initialization_params:
concentration: 1.0 # Dirichlet concentration parameter
temporal_bias: 0.2 # Bias towards temporally consistent states
constraints:
- "normalized"
- "non_negative"
- "minimum_entropy" # Added minimum entropy constraint
min_entropy: 0.5
learning:
enabled: true
rate: 0.15
momentum: 0.85
E_matrix: # Prior over policies
initialization: "uniform"
initialization: "temporal_softmax" # Changed to temporal-aware softmax
initialization_params:
temperature: 1.0
temporal_bonus: 0.2 # Bonus for temporally coherent policies
constraints:
- "normalized"
- "non_negative"
- "entropy_regularized" # Added entropy regularization
entropy_weight: 0.1
learning:
enabled: true
rate: 0.1
momentum: 0.9
# Inference Parameters
inference:
learning_rate: 0.1
temperature: 1.0
learning_rate: 0.05 # Reduced learning rate for more stable updates
temperature: 0.8 # Reduced temperature for more focused exploration
convergence_threshold: 1e-6
max_iterations: 100
belief_update:
method: "gradient_descent"
method: "variational" # Changed to variational inference
momentum: 0.9
adaptive_lr: true
min_lr: 1e-4
max_lr: 0.5
window_size: 10
smoothing_factor: 0.2 # Added smoothing for belief updates
regularization: 0.01 # Added regularization
policy_selection:
method: "temporal_softmax"
temperature: 1.0
exploration_bonus: 0.1
temporal_horizon_bonus: 0.2 # Bonus for policies with good long-term outcomes
# Learning Parameters
learning:
enabled: true
type: "online"
parameters:
memory_decay: 0.95
learning_rate_decay: 0.999
min_learning_rate: 1e-4
exploration_decay: 0.995
min_exploration: 0.05
belief_momentum: 0.9 # Added belief momentum
temporal_smoothing: 0.2 # Added temporal smoothing
regularization:
type: "l2"
strength: 0.01
temporal_coherence: 0.1 # Added temporal coherence regularization
curriculum:
enabled: true
difficulty_increase_rate: 0.1
max_difficulty: 1.0
adaptive_pacing: true # Added adaptive pacing
temporal:
sequence_length: 5
prediction_horizon: 3
sequence_weight: 0.3
consistency_weight: 0.2 # Added consistency weight
belief_update:
method: "momentum" # Changed to momentum-based updates
momentum: 0.9
step_size: 0.1
regularization: 0.01
# Analysis Parameters
analysis:
enabled: true
metrics:
- "belief_entropy"
- "free_energy"
- "accuracy"
- "temporal_consistency"
- "preference_satisfaction"
- "learning_progress" # Added learning progress metric
temporal_analysis:
window_size: 5
overlap: 2
metrics:
- "state_transitions"
- "observation_sequences"
- "belief_trajectories"
- "learning_curves" # Added learning curves
information_theory:
compute_mutual_info: true
compute_kl_divergence: true
compute_entropy_rate: true
temporal_dependencies: true # Added temporal dependencies
# Numerical Parameters
numerical:
stability_threshold: 1e-12
max_condition_number: 1e6
gradient_clip: 10.0
belief_clip: [1e-7, 1.0]
precision_scaling: true
precision_params:
initial: 1.0
learning_rate: 0.1
min_value: 0.1
max_value: 10.0
# Output Settings
output:
@@ -78,13 +208,19 @@ output:
plots: "plots/"
test_results: "test_results/"
simulations: "simulations/"
checkpoints: "checkpoints/" # Added checkpoints directory
analysis: "analysis/" # Added analysis directory
file_formats:
plots: ["png", "pdf"]
data: ["csv", "json"]
checkpoints: ["pt", "npz"]
analysis: ["json", "yaml"]
logging:
level: "INFO"
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
file: "Output/logs/simulation.log"
rotation: "1 day"
backup_count: 7
# Visualization Settings
visualization:
@@ -98,11 +234,34 @@ visualization:
- "free_energy"
- "action_probabilities"
- "observation_counts"
- "learning_curves" # Added learning curves
- "belief_trajectories" # Added belief trajectories
- "state_transitions" # Added state transition visualization
- "temporal_preference_heatmap" # Added temporal preference visualization
- "policy_evaluation_over_time" # Added policy evaluation visualization
- "information_theory_metrics" # Added information theory metrics
- "temporal_consistency_analysis" # Added temporal consistency analysis
style:
colormap: "viridis"
figure_size: [10, 6]
font_size: 12
dpi: 300
grid: true
legend_location: "best"
interactive:
enabled: true
backend: "plotly"
temporal_plots:
enabled: true
types:
- "state_sequence_diagram"
- "belief_flow"
- "preference_evolution"
- "policy_tree"
animation:
enabled: true
fps: 2
duration: 10
# Testing Configuration
testing:
@@ -112,13 +271,37 @@ testing:
initialization:
- "matrix_properties"
- "state_initialization"
- "constraint_satisfaction"
- "temporal_consistency" # Added temporal consistency test
dynamics:
- "belief_updates"
- "action_selection"
- "observation_generation"
- "temporal_consistency"
- "sequence_prediction" # Added sequence prediction test
learning:
- "belief_convergence"
- "policy_improvement"
- "exploration_decay"
- "temporal_credit_assignment" # Added temporal credit assignment test
convergence:
- "free_energy_minimization"
- "belief_convergence"
- "learning_stability"
- "temporal_stability" # Added temporal stability test
numerical:
- "stability"
- "precision"
- "precision"
- "gradient_properties"
- "temporal_coherence" # Added temporal coherence test
benchmarks:
enabled: true
metrics:
- "belief_update_time"
- "policy_evaluation_time"
- "learning_convergence_rate"
- "temporal_prediction_accuracy" # Added temporal prediction metric
baseline_performance:
max_update_time_ms: 50
min_convergence_rate: 0.1
min_temporal_accuracy: 0.7 # Added temporal accuracy threshold
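For orientation, the sketch below shows how the reworked configuration might be consumed. The file path and the assumption that `dimensions`, `matrices`, and `testing` are top-level keys are inferred from this diff; the loader is illustrative, not the project's actual one.

```python
# Illustrative config consumer (hypothetical path; key layout inferred from the diff).
import yaml

with open("Things/Generic_POMDP/config.yaml") as f:  # assumed filename
    cfg = yaml.safe_load(f)

dims = cfg["dimensions"]  # 5 observations, 4 states, 3 actions after this change
print(dims["observations"], dims["states"], dims["actions"])

a_spec = cfg["matrices"]["A_matrix"]
if "sparse" in a_spec["constraints"]:  # sparsity constraint added in this commit
    print("target sparsity:", a_spec["sparsity_params"]["target_sparsity"])  # 0.7

bench = cfg["testing"]["benchmarks"]["baseline_performance"]
print("max belief-update time (ms):", bench["max_update_time_ms"])  # 50
```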