Grounded Multi-Agent Testbed: LLM Agents in Discrete Simulated Environments #154
Grounded Multi-Agent Testbed
Umbrella issue for research infrastructure enabling LLM agents to operate in McRogueFace environments.
Overview
This project implements a testbed for studying grounded language understanding in AI systems. Drawing on cognitive neuroscience research suggesting that language understanding requires the integration of perceptual simulation, world knowledge, and situation modeling (Casto et al., 2025), we use McRogueFace's discrete roguelike environment to study how language agents learn affordances, develop theory of mind, and generalize to novel situations.
Three-Level Architecture
Level 1: Environment Comprehension (Base Physics)
Engine support: Mostly complete via mcrfpy.libtcod

Level 2: Game Behavior Comprehension (Middle Level)
Engine support: Entity event system exists (bump, ev_enter, ev_exit) but needs formalization; see the sketch after this list

Level 3: Equal Agent Comprehension (High Level)
Engine support: Requires new infrastructure
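A minimal sketch of how the Level 2 event hooks might be used from Python. The event names bump, ev_enter, and ev_exit come from the engine; the attribute-style handler registration and the Grid/Entity constructor signatures are assumptions, since the issue itself notes the system still needs formalization.

```python
# Level 2 sketch: a middle-level behavior expressed as entity event hooks.
# ASSUMPTIONS: handler registration by attribute assignment, and the
# Grid/Entity constructor signatures; only the event names (bump,
# ev_enter, ev_exit) are confirmed by this issue.
import mcrfpy

grid = mcrfpy.Grid(grid_size=(20, 15))   # signature assumed
door = mcrfpy.Entity((10, 7))            # signature assumed
grid.entities.append(door)

def on_bump(actor):
    # The rule an agent must learn: bumping a closed door opens it.
    door.sprite_index = 1                # hypothetical "open door" tile index
    print(f"{actor} opened the door")

def on_enter(actor):
    print(f"{actor} stepped through the doorway")

door.bump = on_bump                      # assumed hookup mechanism
door.ev_enter = on_enter
```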
Two Operating Modes
Headless Simulation Mode
Animated Demo Mode
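To make the two modes concrete, here is a hedged sketch of a headless run: the simulation is stepped with no render window, and each agent's observation is fed straight to a policy (an LLM call in the testbed). None of the helper names below (observe, apply_action, run_headless) are engine API; they are placeholders for illustration.

```python
# Headless Simulation Mode sketch; every name here is a hypothetical
# placeholder, not engine API. Animated Demo Mode would instead hand
# control to the engine's render loop.
def observe(grid, agent):
    """Stub: build a text/grid observation for one agent."""
    return {"position": getattr(agent, "pos", None)}

def apply_action(grid, agent, action):
    """Stub: validate and execute one discrete action."""
    pass

def run_headless(grid, agents, policy, max_turns=1000):
    # No window, no frame pacing: step agents in turn order as fast as
    # the policy can respond.
    for _turn in range(max_turns):
        for agent in agents:
            apply_action(grid, agent, policy(agent, observe(grid, agent)))
```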
Research Questions (from proposal)
Development Phases
Phase 1: Two Agents in a Room
Phase 2: Learning Middle-Level Behaviors
Phase 3: Affordance Learning and Puzzles
Phase 4: Economic Reasoning
Phase 5: Town and Dungeon Integration
Blocking Issues
Engine infrastructure required:
Project-specific issues:
Related Issues
References
Progress Update: FOV/Perspective System Complete
Commits c5b4200 and a529e5e implement the per-agent perspective rendering infrastructure:

New API

FOV Configuration:
- mcrfpy.FOV enum (BASIC, DIAMOND, SHADOW, PERMISSIVE_0-8, RESTRICTIVE, SYMMETRIC_SHADOWCAST)
- mcrfpy.default_fov module property
- grid.fov and grid.fov_radius properties

ColorLayer Perspective Methods:
- fill_rect(x, y, w, h, color) - Fill rectangular region
- draw_fov(source, radius, fov, visible, discovered, unknown) - One-time FOV visualization
- apply_perspective(entity, visible, discovered, unknown) - Bind layer to entity
- update_perspective() - Refresh from bound entity's gridstate
- clear_perspective() - Remove binding

Entity Methods:
- entity.update_visibility() - Updates gridstate AND all bound ColorLayers
- entity.visible_entities(fov=None, radius=None) - Get list of visible entities
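A short sketch tying the calls above together. The method and property names (default_fov, fov_radius, apply_perspective, update_visibility, visible_entities) are the ones listed in this update; the Grid/Entity constructor signatures, the color tuples, and how a ColorLayer is obtained from a grid are assumptions, so treat this as illustrative.

```python
# Fog-of-war sketch using the new API. Constructor signatures, color
# tuples, and the ColorLayer attachment point are ASSUMPTIONS; the
# perspective/visibility method names are from this update.
import mcrfpy

mcrfpy.default_fov = mcrfpy.FOV.SYMMETRIC_SHADOWCAST

grid = mcrfpy.Grid(grid_size=(40, 30))   # signature assumed
grid.fov_radius = 8
agent = mcrfpy.Entity((5, 5))            # signature assumed
grid.entities.append(agent)

# Bind a layer to the agent: visible cells clear, discovered cells
# dimmed, unknown cells fully dark.
fog = grid.color_layers[0]               # attachment point assumed
fog.apply_perspective(agent,
                      visible=(0, 0, 0, 0),
                      discovered=(0, 0, 0, 160),
                      unknown=(0, 0, 0, 255))

agent.update_visibility()                # refreshes gridstate and bound layers
for other in agent.visible_entities():   # line-of-sight entity list
    print("visible:", other)
```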
Demo
- tests/demo/perspective_patrol_demo.py - Interactive fog of war demonstration

Relevance to Phase 1
This provides the foundation for the "per-agent perspective rendering for VLM input" mentioned in Level 3. Each agent can have its own ColorLayer showing what it can see, and visible_entities() enables AI decision-making based on line-of-sight.
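As a usage note, this composes directly into a per-agent observation for Phase 1: render the agent's masked ColorLayer for a VLM, or serialize visible_entities() for a text-only agent. A hypothetical serializer (obs_for_agent is not engine API):

```python
# Hypothetical observation builder for an LLM agent. Only
# update_visibility() and visible_entities() are API from this update;
# the dict layout is illustrative.
def obs_for_agent(agent):
    agent.update_visibility()
    return {"visible_entities": [str(e) for e in agent.visible_entities()]}
```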