amogus/docs/design_rendering.md

# Ray-Traced FOV & Video Rendering Pipeline
## Current State
- Vision is **room-based** (graph traversal)
- Player sees everyone in same room
- No pixel-level visibility, no wall occlusion
## Target State
- **Ray-traced FOV** matching real Among Us
- Walls block visibility
- Circular vision radius from player
- Light sabotage reduces radius
- Video output at **60fps** for YouTube
---
## Part 1: Ray-Traced FOV
### Map Data
Current `skeld.json` has rooms + edges (corridors). Need to add:
```json
{
  "walls": [
    {"p1": [100, 50], "p2": [100, 200]},
    {"p1": [100, 200], "p2": [250, 200]}
  ],
  "spawn_points": {"cafeteria": [300, 400]}
}
```
### Visibility Algorithm
1. **Cast rays** from player position in 360° (e.g., 360 rays)
2. **Intersect** each ray with wall segments
3. **Closest intersection** per ray = vision boundary
4. **Vision radius** clamps max distance
5. **Player visible** if: within radius AND not occluded by walls
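The steps above can be sketched as a single visibility predicate. The function names, the wall-as-point-pair representation, and the CCW orientation test are illustrative choices here, not the project's actual API:

```python
import math

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 strictly crosses segment p3-p4 (CCW orientation test)."""
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4)
            and ccw(p1, p2, p3) != ccw(p1, p2, p4))

def is_visible(observer, target, walls, vision_radius):
    """Steps 4-5: visible iff within radius and no wall crosses the sight line."""
    if math.dist(observer, target) > vision_radius:
        return False
    return not any(segments_intersect(observer, target, w[0], w[1]) for w in walls)
```

For the occlusion check a direct observer-to-target segment test suffices; the full 360-ray sweep is only needed when rendering the vision polygon.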
### Implementation
- `src/engine/vision_raycast.py` — new module
- `RaycastVisionSystem.get_visible_players(observer_pos, all_players, walls)`
- Returns: list of visible player IDs + positions
---
## Part 2: Engine Changes
### Current Position Model (`types.py`)
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Position:
    room_id: Optional[str] = None   # Discrete room
    edge_id: Optional[str] = None   # Walking between rooms
    progress: float = 0.0           # 0.0-1.0 along the edge
```
### New Position Model
```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Position:
    x: float = 0.0                  # Pixel X
    y: float = 0.0                  # Pixel Y
    room_id: Optional[str] = None   # Derived from (x, y)

    def distance_to(self, other: "Position") -> float:
        return math.hypot(self.x - other.x, self.y - other.y)
```
### New Map Data (`skeld.json`)
```json
{
  "rooms": [...],
  "edges": [...],
  "walls": [
    {"p1": [100, 50], "p2": [100, 200]},
    ...
  ],
  "room_polygons": {
    "cafeteria": [[x1, y1], [x2, y2], ...],
    ...
  },
  "spawn_points": {
    "cafeteria": [300, 400],
    ...
  }
}
```
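Since `room_id` is now derived from the pixel position, one way to do the derivation is a point-in-polygon test against `room_polygons`. A minimal ray-casting sketch, assuming polygons are stored as lists of `(x, y)` vertices; `derive_room_id` is a hypothetical helper, not an existing engine function:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count crossings of a rightward horizontal ray with edges."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the ray's y-level and the crossing lies to the right of (x, y)
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def derive_room_id(x, y, room_polygons):
    """Hypothetical helper: map a pixel position to the room containing it."""
    for room_id, polygon in room_polygons.items():
        if point_in_polygon(x, y, polygon):
            return room_id
    return None  # in a corridor, or outside all rooms
```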
### New Module: `vision_raycast.py`
```python
class RaycastVision:
    def __init__(self, walls: list[Wall], vision_radius: float):
        ...

    def is_visible(self, from_pos: Position, to_pos: Position) -> bool:
        """True if line-of-sight exists (no wall occlusion)."""
        ...

    def get_visible_players(self, observer: Position,
                            all_players: list[Player]) -> list[Player]:
        """Returns players within vision radius AND line-of-sight."""
        ...

    def get_vision_polygon(self, observer: Position) -> list[tuple]:
        """For rendering: polygon representing visible area."""
        ...
```
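One way to realize `get_vision_polygon` is a classic angle sweep: cast N rays, keep the nearest wall hit per ray (clamped to the vision radius), and connect the hit points in angle order. A self-contained sketch, with its own parametric intersection helper since the module's internals aren't fixed yet:

```python
import math

def _ray_hit(origin, direction, wall):
    """Distance along a unit ray to a wall segment ((x1, y1), (x2, y2)), or None."""
    (ax, ay), (bx, by) = wall
    rx, ry = direction
    sx, sy = bx - ax, by - ay
    denom = rx * sy - ry * sx
    if abs(denom) < 1e-12:
        return None  # ray parallel to wall
    qx, qy = ax - origin[0], ay - origin[1]
    t = (qx * sy - qy * sx) / denom      # distance along the ray
    u = (qx * ry - qy * rx) / denom      # position along the wall segment
    return t if t >= 0 and 0 <= u <= 1 else None

def vision_polygon(origin, walls, radius, n_rays=360):
    """Polygon of the area visible from origin, as a list of (x, y) points."""
    points = []
    for i in range(n_rays):
        angle = 2 * math.pi * i / n_rays
        direction = (math.cos(angle), math.sin(angle))
        hits = [d for w in walls if (d := _ray_hit(origin, direction, w)) is not None]
        dist = min(hits + [radius])  # nearest wall, clamped to vision radius
        points.append((origin[0] + direction[0] * dist,
                       origin[1] + direction[1] * dist))
    return points
```

The resulting point list can be drawn as a filled polygon mask over the map to produce the FOV vignette described in Part 3.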
### Wall Intersection Algorithm
```python
def ray_intersects_wall(ray_start, ray_dir, wall_p1, wall_p2) -> float | None:
    """Returns distance to intersection, or None if no hit."""
    # Standard line-segment intersection math
    ...
```
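A minimal working version of that math, assuming `ray_dir` is a unit vector and walls are point pairs (a sketch, not the final module):

```python
def ray_intersects_wall(ray_start, ray_dir, wall_p1, wall_p2):
    """Returns distance along the ray to the wall segment, or None if no hit.

    Solves ray_start + t * ray_dir == wall_p1 + u * (wall_p2 - wall_p1)
    for t >= 0 and 0 <= u <= 1 via 2D cross products.
    """
    rx, ry = ray_dir
    sx, sy = wall_p2[0] - wall_p1[0], wall_p2[1] - wall_p1[1]
    denom = rx * sy - ry * sx
    if abs(denom) < 1e-12:
        return None  # ray parallel to wall
    qx, qy = wall_p1[0] - ray_start[0], wall_p1[1] - ray_start[1]
    t = (qx * sy - qy * sx) / denom   # distance along the ray
    u = (qx * ry - qy * rx) / denom   # position along the wall segment
    if t >= 0 and 0 <= u <= 1:
        return t  # equals pixel distance when ray_dir is unit-length
    return None
```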
---
## Part 3: Rendering Pipeline
### Frame Generation (60fps)
```
replay.json → Renderer → frames/0001.png, 0002.png, ...
```
### Per Frame
1. Draw map background (Skeld PNG)
2. Apply FOV mask (ray-traced vignette)
3. Draw player sprites at interpolated positions
4. Draw bodies
5. Overlay effects (kill, vent, sabotage)
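For step 3, per-frame positions can come from linear interpolation between the discrete positions recorded in `replay.json`. The keyframe format and the 10 ticks/s engine rate below are assumptions for illustration:

```python
def lerp_position(pos_a, pos_b, alpha):
    """Linearly interpolate between two (x, y) positions, alpha in [0, 1]."""
    return (pos_a[0] + (pos_b[0] - pos_a[0]) * alpha,
            pos_a[1] + (pos_b[1] - pos_a[1]) * alpha)

def frame_position(keyframes, frame, fps=60, tick_rate=10):
    """Position at a given video frame, given one (x, y) keyframe per engine tick.

    At fps=60 and tick_rate=10, each tick spans 6 video frames, so the
    renderer blends between consecutive keyframes for smooth walking.
    """
    frames_per_tick = fps / tick_rate
    tick = min(int(frame / frames_per_tick), len(keyframes) - 2)
    alpha = frame / frames_per_tick - tick
    return lerp_position(keyframes[tick], keyframes[tick + 1], min(alpha, 1.0))
```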
### Video Assembly
```bash
ffmpeg -framerate 60 -i frames/%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4
```
---
## Part 4: Assets
| Asset | Format | Source |
|-------|--------|--------|
| Skeld map | PNG | Fan art / game extract |
| Crewmate sprites | Spritesheet | Available online |
| Kill animations | Sprite sequence | Extract or recreate |
| Meeting UI | HTML/PNG | Recreate |
---
## Implementation Order
1. **Map upgrade** — Add walls + pixel coords
2. **Raycast vision** — `vision_raycast.py`
3. **Pixel positions** — Upgrade engine to (x,y)
4. **Path interpolation** — Smooth walking
5. **Frame renderer** — Pillow/Pygame
6. **Meeting renderer** — Overlay
7. **FFmpeg integration** — Stitch to video
## Questions
1. **POV style**: Single player POV, or omniscient?
2. **Internal thoughts**: Show as subtitles?
3. **TTS**: Voice for dialogue?