# Cognitive Ecosystem Modeling Framework

A comprehensive framework for modeling cognitive ecosystems using [[active_inference]], integrated with [[obsidian_linking]] for knowledge management.

## Overview

This project combines cognitive modeling with knowledge management to create a powerful framework for:

- Modeling agent behaviors using [[active_inference]]
- Managing complex [[knowledge_organization]]
- Visualizing and analyzing [[cognitive_phenomena]]
- Simulating multi-agent interactions

Project Structure
See ai_folder_structure for comprehensive directory organization.
📁 templates/ # Template definitions
├── node_templates/ # Base templates for cognitive nodes
│ ├── agent_template.md # See [[ai_concept_template]]
│ ├── belief_template.md
│ └── ...
│
📁 knowledge_base/ # Knowledge structure
├── cognitive/ # Core cognitive concepts
├── agents/ # Agent definitions
├── beliefs/ # Belief networks
└── ...
📁 src/ # Source code
├── models/ # Core modeling components
├── utils/ # Utility functions
└── analysis/ # Analysis tools
📁 docs/ # Documentation (See [[documentation_standards]])
📁 tests/ # Test suite (See [[testing_guide]])
📁 data/ # Data storage
## Features

- Knowledge Management
- Cognitive Modeling
- Analysis & Visualization

## Knowledge Integration Architecture

### Bidirectional Knowledge Graph

The framework leverages [[obsidian_linking]] to create a living knowledge graph that:

- Enforces the [[validation_framework]]
- Enables [[ai_validation_framework]] checks on relationships
- Supports [[machine_readability]] of dependencies
- Facilitates [[research_education]]

### Link Types and Semantics

See [[linking_completeness]] for comprehensive linking patterns.

- **Theoretical Dependencies**

  [[measure_theory]] → [[probability_theory]] → [[stochastic_processes]]

  - Enforces prerequisite knowledge
  - Validates theoretical foundations
  - Ensures consistent notation

- **Implementation Dependencies**

  [[active_inference]] → [[belief_updating]] → [[action_selection]]

  - Tracks computational requirements
  - Maintains implementation consistency
  - Documents design decisions

- **Validation Links**

  [[testing_guide]] → [[validation_framework]] → [[quality_metrics]]

  - Ensures rigorous testing
  - Maintains quality standards
  - Documents validation procedures

## Probabilistic Programming Integration

### Graph Structure Mapping

The repository's link structure maps directly onto probabilistic graphical models:

```python
# Example: converting knowledge-base links into a Bayesian network
def build_bayesian_graph(knowledge_base: Path) -> BayesianNetwork:
    """Convert knowledge base links to a Bayesian network.

    See [[ai_semantic_processing]] for details.
    """
    graph = BayesianNetwork()
    # Extract links and dependencies from every note
    for file in knowledge_base.glob('**/*.md'):
        links = extract_links(file)
        nodes = create_nodes(links)
        edges = create_edges(links)
        # Add to the graph with conditional probabilities
        graph.add_nodes(nodes)
        graph.add_edges(edges)
    return graph
```
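
The `extract_links` helper above is not shown in this README; one minimal way to implement it, assuming standard Obsidian-style `[[wikilinks]]` and treating each note's file stem as the source node, is:

```python
import re
from pathlib import Path

# Captures the target of [[target]], [[target|alias]], or [[target#heading]]
WIKILINK_PATTERN = re.compile(r"\[\[([^\]|#]+)")

def extract_links(file: Path) -> list[tuple[str, str]]:
    """Return (source, target) pairs for every wikilink in a markdown note."""
    source = file.stem
    text = file.read_text(encoding="utf-8")
    return [(source, target.strip()) for target in WIKILINK_PATTERN.findall(text)]
```

Each returned pair corresponds to a directed edge, which `create_nodes` and `create_edges` can then translate into graph structure.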
Implementation Patterns
See package_documentation for detailed implementation guidelines.
-
Direct Specification
# In matrix_specification.md matrix: type: observation dimensions: [num_states, num_observations] distribution: categorical parameters: prior: dirichlet concentration: [1.0, ..., 1.0] -
Probabilistic Annotations
@probabilistic_model class ObservationModel: """Implementation with probabilistic annotations. See [[predictive_processing]] for theoretical background. """ def __init__(self): self.A = PyroMatrix(dims=['states', 'obs']) def forward(self, state): return pyro.sample('obs', dist.Categorical(self.A[state])) -
Inference Specifications
inference: method: variational guide: mean_field optimizer: adam parameters: learning_rate: 0.01 num_particles: 10
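
As one illustration of how the Direct Specification pattern could be consumed, the sketch below (an assumption, not the project's actual loader) turns a parsed `matrix` spec into a row-wise Dirichlet prior with Pyro; because the concentration list in the spec above is elided, a uniform concentration of 1.0 is assumed:

```python
import torch
import pyro.distributions as dist

def build_observation_prior(spec: dict, num_states: int, num_observations: int) -> dist.Dirichlet:
    """Build a row-wise Dirichlet prior over the observation matrix from a spec dict."""
    matrix_spec = spec["matrix"]
    assert matrix_spec["distribution"] == "categorical"
    assert matrix_spec["parameters"]["prior"] == "dirichlet"
    # One Dirichlet row per hidden state; uniform concentration assumed here
    concentration = torch.ones(num_states, num_observations)
    return dist.Dirichlet(concentration)
```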
## Knowledge Base Integration

### Automated Validation

```python
def validate_knowledge_base():
    """Validate theoretical consistency.

    See [[ai_validation_framework]] for details.
    """
    # Check link consistency
    validate_theoretical_dependencies()
    # Verify probabilistic specifications
    validate_probability_constraints()
    # Test implementation coherence
    validate_implementation_patterns()
```
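
A possible shape for the first of these checks, assuming notes are `.md` files whose stems match their link targets (the function name follows the call above; the body is illustrative):

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def validate_theoretical_dependencies(knowledge_base: Path = Path("knowledge_base")) -> list[str]:
    """Return a report of wikilinks that do not resolve to an existing note."""
    notes = {f.stem for f in knowledge_base.glob("**/*.md")}
    problems = []
    for file in knowledge_base.glob("**/*.md"):
        for target in WIKILINK.findall(file.read_text(encoding="utf-8")):
            if target.strip() not in notes:
                problems.append(f"{file.name}: unresolved link [[{target.strip()}]]")
    return problems
```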
Learning Pathways
See research_education for comprehensive learning integration.
-
Theory: Follow theoretical dependency chains
[[measure_theory]] → [[probability_theory]] → [[active_inference]] -
Implementation: Track implementation requirements
[[matrix_design]] → [[numerical_methods]] → [[optimization]] -
Validation: Ensure testing coverage
[[unit_tests]] → [[integration_tests]] → [[system_validation]]
## Meta-Programming Capabilities

### Code Generation

```python
def generate_model_code(spec_file: Path) -> str:
    """Generate an implementation from specifications.

    See [[ai_documentation_style]] for code generation patterns.
    """
    # Parse the markdown specification
    spec = parse_markdown_spec(spec_file)
    # Extract the probabilistic model
    model = extract_probabilistic_model(spec)
    # Generate the implementation
    return generate_implementation(model)
```
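
The parsing step above could be as simple as pulling the first fenced YAML block out of a specification note; this sketch assumes that format (the function name mirrors the call above, but the details are not the project's actual parser):

```python
import re
from pathlib import Path
import yaml

YAML_BLOCK = re.compile(r"`{3}yaml\s*\n(.*?)`{3}", re.DOTALL)

def parse_markdown_spec(spec_file: Path) -> dict:
    """Load the first YAML code block embedded in a markdown specification."""
    match = YAML_BLOCK.search(spec_file.read_text(encoding="utf-8"))
    if match is None:
        raise ValueError(f"No YAML specification block found in {spec_file}")
    return yaml.safe_load(match.group(1))
```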
### Validation Rules

```python
def check_probabilistic_consistency():
    """Verify probabilistic consistency.

    See [[validation_framework]] for validation rules.
    """
    # Check matrix constraints
    verify_stochastic_matrices()
    # Validate probability measures
    verify_measure_consistency()
    # Check inference specifications
    verify_inference_methods()
```
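
For the matrix-constraint check, one minimal sketch (the function name follows the call above; the dict-of-arrays input and tolerance are assumptions) is to verify that every matrix is row-stochastic:

```python
import numpy as np

def verify_stochastic_matrices(matrices: dict[str, np.ndarray], atol: float = 1e-6) -> None:
    """Check that each matrix has nonnegative entries and rows summing to 1."""
    for name, matrix in matrices.items():
        if np.any(matrix < 0):
            raise ValueError(f"{name}: negative entries are not valid probabilities")
        row_sums = matrix.sum(axis=-1)
        if not np.allclose(row_sums, 1.0, atol=atol):
            raise ValueError(f"{name}: rows must sum to 1, got {row_sums}")
```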
## Benefits

- **Theoretical Consistency**
  - Automated checks on mathematical relationships ([[ai_validation_framework]])
  - Enforcement of probabilistic constraints
  - Verification of implementation patterns
- **Learning Support**
  - Concept learning pathways via [[research_education]]
  - Clear dependency tracking
  - Interactive knowledge discovery
- **Implementation Quality**
  - Documentation per [[ai_documentation_style]]
  - Consistent design patterns
  - Testing per [[testing_guide]]
- **Documentation Integration**

## Getting Started

- **Setup Environment**

  ```bash
  python -m venv venv
  source venv/bin/activate  # or `venv\Scripts\activate` on Windows
  pip install -r requirements.txt
  ```

- **Configure Project**
  - Edit `config.yaml` for project settings
  - Customize templates in `templates/`
  - Set up [[obsidian_linking]]

- **Create Cognitive Models**
  - Use [[ai_concept_template]] to define agents
  - Configure belief networks
  - Set up observation spaces
  - Define action policies

- **Run Simulations**
  - Execute model simulations
  - Analyze results
  - Visualize networks

## Testing

See [[testing_guide]] for comprehensive testing documentation.

### Running Tests

```bash
# Run all tests with verbose output
python -m pytest -v

# Run a specific test file
python -m pytest tests/test_matrix_ops.py -v

# Run tests with a coverage report
python -m pytest --cov=src
```

### Test Organization

- `tests/test_matrix_ops.py`: Matrix operation tests
- `tests/test_visualization.py`: Visualization component tests
- `tests/conftest.py`: Shared test fixtures and configuration

## Development

### Contributing

See [[contribution_guide]] for detailed contribution guidelines.

### Documentation

See [[documentation_standards]] for documentation guidelines.

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- Active Inference research community
- Obsidian development team
- Contributors and maintainers