Developer Guide
Fork it. Run it. Build on it. Contribute to a 366-engine autonomous AI system.
Requirements
- Python 3.10+: Engine runtime
- Ollama: Local AI inference
- Git: Version control
- ~2GB RAM: Minimum for core
That's all you need for the zero-secret core (283 engines). Optional API keys unlock additional engines for web search, social media, and external integrations.
Quick Start: Fork & Run
Step 1: Clone the repository
git clone https://github.com/meekotharaccoon-cell/meeko-nerve-center.git
cd meeko-nerve-center
Step 2: Install Ollama
# Linux/Mac
curl -fsSL https://ollama.ai/install.sh | sh
# Windows
# Download from https://ollama.ai/download
# Pull a model
ollama pull llama3.2
Step 3: Run a single engine
cd mycelium
python LIVE_WIRE_SCANNER.py
This runs the wire scanner, which maps all engine connections and writes results to data/live_wire_report.json.
Step 4: Run the full OMNIBUS cycle
python OMNIBUS.py
This executes the complete AUTO_GENESIS loop: scan, discover, generate, wire, heal, evolve. One cycle typically takes 5-15 minutes depending on gap count and model speed.
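The cycle can be pictured as a plain ordered loop. The sketch below is illustrative only: the phase names mirror the cycle described above, but the function bodies are placeholders, not the real OMNIBUS implementation.

```python
# Minimal sketch of a phase loop like the AUTO_GENESIS cycle.
# The lambdas are stand-ins; each real phase does substantial work.
def auto_genesis_cycle(phases):
    results = {}
    for name, phase in phases:
        results[name] = phase()  # run phases in order, record outcomes
    return results

phases = [
    ("scan", lambda: "topology mapped"),
    ("discover", lambda: "gaps found"),
    ("generate", lambda: "engines created"),
    ("wire", lambda: "connections built"),
    ("heal", lambda: "errors repaired"),
    ("evolve", lambda: "metrics updated"),
]
print(auto_genesis_cycle(phases))
```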
Project Structure
meeko-nerve-center/
mycelium/ # All engines live here (366 .py files)
data/ # Shared state (JSON data files, 424 connected)
docs/ # GitHub Pages site (dashboards, products)
products/ # Product content (guides, templates, bundles)
AUTO_EXEC.ps1 # Windows auto-execution script
OMNIBUS.py # Master orchestrator (not in mycelium/)
data/live_wire_report.json # Wire topology map
data/system_health.json # Current health status
data/chimera_evolution_report.json # Evolution metrics
How to Write an Engine
Every engine follows the same pattern: read inputs, process, write outputs. Here is the minimal template:
"""
MY_NEW_ENGINE.py
Description: What this engine does
Inputs: data/some_input.json
Outputs: data/my_output.json
Zero-Secret: True
"""
import json
import os
from datetime import datetime, timezone
DATA_DIR = os.path.join(os.path.dirname(__file__), '..', 'data')
def run():
# Read input
input_path = os.path.join(DATA_DIR, 'some_input.json')
if os.path.exists(input_path):
with open(input_path, 'r') as f:
data = json.load(f)
else:
data = {}
# Process
result = {
"processed_at": datetime.now(timezone.utc).isoformat(),
"input_records": len(data),
"status": "ok"
}
# Write output
output_path = os.path.join(DATA_DIR, 'my_output.json')
with open(output_path, 'w') as f:
json.dump(result, f, indent=2)
return result
if __name__ == '__main__':
print(run())
Engine Naming Conventions
| Pattern | Meaning | Example |
| --- | --- | --- |
| ALL_CAPS.py | Standalone engine | REVENUE_MONITOR.py |
| GENERATED_*.py | Auto-generated by AUTO_GENESIS | GENERATED_SELF_BUILDER_CONSUMER.py |
| LEGACY_SIFTED_*.py | Migrated from legacy codebase | LEGACY_SIFTED_money_tracker.py |
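As a sketch, a filename can be classified by checking the prefixes above in order of specificity. The `classify` helper below is hypothetical, not part of the project:

```python
import re

# Sketch: map an engine filename to its naming-convention category.
# Prefix checks come first so GENERATED_*.py doesn't fall through
# to the generic all-caps standalone pattern.
def classify(name: str) -> str:
    if name.startswith("GENERATED_"):
        return "auto-generated"
    if name.startswith("LEGACY_SIFTED_"):
        return "legacy migration"
    if re.fullmatch(r"[A-Z0-9_]+\.py", name):
        return "standalone"
    return "unknown"

print(classify("REVENUE_MONITOR.py"))  # standalone
```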
Key Rules
- Always include a run() function -- this is what the orchestrator calls
- Declare inputs/outputs in the module docstring so the wire scanner can find them
- Use data/ for all state -- never hardcode absolute paths
- Handle missing input files gracefully (the file may not exist yet on first run)
- Write valid JSON to output files -- malformed output triggers the immune system
- Set Zero-Secret: True if no API keys are needed
Warning: Never commit API keys, tokens, or credentials to the repository. The SECRETS_CHECKER engine scans for leaked secrets and will flag violations. Use environment variables for any sensitive values.
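A minimal sketch of the environment-variable approach: read the key at startup, and let the engine degrade to zero-secret behavior when it is absent. `MY_SERVICE_API_KEY` is a placeholder name, not a variable the project defines.

```python
import os

# Sketch: read an optional API key from the environment instead of
# hardcoding it. "MY_SERVICE_API_KEY" is a hypothetical placeholder.
API_KEY = os.environ.get("MY_SERVICE_API_KEY")

def keyed_feature_enabled():
    # With no key present, the engine stays zero-secret and should
    # simply skip any behavior that needs external credentials.
    return API_KEY is not None

print(keyed_feature_enabled())
```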
How Wiring Works
You do not need to manually wire engines. The LIVE_WIRE_SCANNER automatically detects connections:
- It scans each engine's source code for file reads (json.load, open(), etc.)
- It scans for file writes (json.dump, open('w'), etc.)
- When Engine A writes data/foo.json and Engine B reads data/foo.json, a wire is created
- Wires are classified as zero-secret or keyed based on whether either engine requires API keys
After adding a new engine, run the wire scanner to see it appear in the topology:
python mycelium/LIVE_WIRE_SCANNER.py
# Check the result:
python -c "import json; d=json.load(open('data/live_wire_report.json')); print(f'Total wires: {d[\"stats\"][\"total_wires_discovered\"]}')"
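A rough sketch of the detection idea: collect the `data/*.json` paths each engine's source mentions, then intersect them across engines. This is a hypothetical simplification; the real scanner's heuristics may differ.

```python
import re

# Sketch: find data/*.json paths referenced in engine source text.
def find_data_files(source: str):
    return set(re.findall(r"data/[\w.]+\.json", source))

# Toy engine sources: A writes a file that B reads.
engine_a = "json.dump(result, open('data/foo.json', 'w'))"
engine_b = "data = json.load(open('data/foo.json', 'r'))"

# A wire exists when one engine writes a file another engine reads.
shared = find_data_files(engine_a) & find_data_files(engine_b)
print(shared)
```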
Contributing
Pull Request Workflow
- Fork the repository on GitHub
- Create a feature branch: git checkout -b my-new-engine
- Write your engine in mycelium/
- Test it locally: python mycelium/MY_ENGINE.py
- Run the wire scanner to verify connectivity
- Commit and push
- Open a pull request against main
Ideas for New Engines
- Data enrichment: Engines that read existing data files and add derived insights
- Content generation: Engines that produce blog posts, social media content, or documentation
- Health monitoring: Engines that check system health metrics and alert on anomalies
- Revenue streams: Engines that discover and integrate new platforms for product distribution
- Crisis detection: Engines that monitor news feeds for humanitarian emergencies
- Translation: Engines that translate content into additional languages
Customizing Revenue Routing
If you fork SolarPunk for your own cause, update the aid routing configuration:
# data/aid_routing.json
{
  "primary_beneficiary": "YOUR_CHARITY_NAME",
  "primary_url": "https://your-charity.org/donate",
  "split": {"mutual_aid": 0.99, "operational": 0.01},
  "verified": true
}
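Before committing a fork's routing file, it is worth checking that the split fractions sum to 1.0. The `validate_routing` helper and the schema assumptions below are hypothetical, based only on the example fields above:

```python
# Sketch: sanity-check a routing config like the example above.
# Field names follow the example; the schema is assumed, not official.
def validate_routing(config: dict) -> bool:
    split = config.get("split", {})
    total = sum(split.values())
    # Fractions must cover the whole (within float tolerance) and a
    # donation URL must be present.
    return abs(total - 1.0) < 1e-9 and "primary_url" in config

config = {
    "primary_beneficiary": "YOUR_CHARITY_NAME",
    "primary_url": "https://your-charity.org/donate",
    "split": {"mutual_aid": 0.99, "operational": 0.01},
    "verified": True,
}
print(validate_routing(config))  # True when the split sums to 1.0
```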
Architecture Deep Dives