Setup
The Lab
This is the infrastructure behind the experiments—the hardware and software powering this journey.
Hardware
I work across two machines, each optimized for different purposes:
MacBook Pro (M1 Pro) - SRK MacBook Pro
Processor:
- Apple M1 Pro chip
- 8 cores (6 performance, 2 efficiency)
- Built for efficiency and stability
Memory:
- 16 GB unified memory
- Shared between CPU and GPU
Operating System:
- macOS 15.1 (Sequoia)
- Darwin kernel 24.1.0
Why this machine: My reliable daily driver. Handles writing, general development, and everyday tasks flawlessly. When I need stability and “it just works,” this is the machine.
ASUS ROG Flow Z13 (GENESISFLOWZ13)
Processor:
- AMD Ryzen AI Max+ 395 w/ Radeon 8060S
- 16 cores, 32 logical processors @ 3.0 GHz
- Zen 5 "Strix Halo", AMD's latest Ryzen AI architecture
Memory:
- 128 GB RAM (126.6 GB usable)
- 8 x 16 GB Micron LPDDR5X @ 8533 MT/s (soldered)
- Capable of running 70B+ parameter LLMs locally
Graphics:
- AMD Radeon 8060S (integrated)
- 4 GB VRAM carved out by default; as an integrated GPU it can draw on the 128 GB unified pool beyond that
Storage:
- 1TB NVMe SSD (Micron)
- PCIe NVMe interface (Windows reports NVMe drives as "SCSI"; it isn't one)
Network:
- MediaTek Wi-Fi 7 MT7925
- Cutting-edge wireless (1.2 Gbps link speed)
Operating System:
- Kubuntu 25.10 (KDE Plasma 6) - migrated from Windows 11 Pro in November 2024
- Better thermal management for AMD hardware
- Native Linux development environment
Why this machine: The AI compute powerhouse. 128 GB of RAM lets me run large local LLMs (70B+ parameters), intensive experiments, and heavy processing without cloud dependencies.
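How far does 128 GB actually go for local models? A back-of-envelope sketch in Python; the bits-per-parameter figures are rough assumptions, and real usage adds KV cache and runtime overhead on top of the weights:

```python
# Rough sizing for local LLM weights. Weights only: KV cache,
# activations, and runtime overhead come on top of these numbers.
GIB = 1024 ** 3

def weights_gib(params_billion: float, bits_per_param: float) -> float:
    """Approximate memory needed just to hold the model weights."""
    return params_billion * 1e9 * bits_per_param / 8 / GIB

# Q4 quantization averages ~4.5 bits/param once scales are included.
for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4.5)]:
    print(f"70B @ {name}: ~{weights_gib(70, bits):.0f} GiB")
# 70B @ FP16: ~130 GiB  -> does not fit comfortably, even here
# 70B @ Q8:   ~65 GiB   -> fits with plenty of headroom
# 70B @ Q4:   ~37 GiB   -> easy; room for context and a second model
```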
Development Environment
Core Tools:
- Node.js v22.21.0
- Python 3.12.10
- Git 2.51.2
- VS Code 1.105.1
- GitHub CLI (gh) 2.81.0
AI Infrastructure:
- Claude Code CLI (Anthropic)
- LM Studio (local LLM hosting)
- Ollama 0.12.6 (local API sketch below)
- Whisper (faster-whisper implementation; usage sketch below)
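Ollama serves a local HTTP API on port 11434 by default, which is what makes it scriptable. A minimal sketch of calling it from the Python standard library; the llama3.1:70b tag is just an example stand-in for whatever model you have pulled:

```python
import json
from urllib import request

# Ollama's default local endpoint; the model tag is an example,
# substitute whatever `ollama pull` has already fetched.
payload = {
    "model": "llama3.1:70b",
    "prompt": "Summarize the benefits of local-first AI tooling.",
    "stream": False,  # one JSON object back instead of a token stream
}
req = request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])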
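For transcription, faster-whisper exposes a small Python API. A quick sketch; the model size, device, and file name are placeholders to tune for your hardware:

```python
from faster_whisper import WhisperModel  # pip install faster-whisper

# "small" on CPU with int8 is a conservative starting point;
# larger models and other devices/compute types are drop-in swaps.
model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("recording.wav")  # placeholder file
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")
```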
MCP Servers (6 active):
- Filesystem MCP - File system operations
- Git MCP - Repository management
- GitHub MCP - Remote repo access (OAuth)
- Memory MCP - Knowledge graph tracking
- Fetch MCP - Web content retrieval
- Shared Context MCP - Cross-machine state sync
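Each of these speaks the Model Context Protocol over stdio (or HTTP). As a sketch of what a client sees, the official Python SDK (pip install mcp) can launch the reference filesystem server and list its tools; the sandbox path here is illustrative, not my actual config:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the reference filesystem server over stdio.
# The sandbox path is illustrative; point it anywhere you like.
server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/sandbox"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```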
Why This Setup?
High RAM: 128GB enables running multiple large language models locally without cloud dependencies. True AI experimentation requires the ability to run models on your own hardware.
Latest Architecture: AMD Ryzen AI MAX+ with integrated NPU for potential AI acceleration. Built for the next generation of AI workloads.
Local-First: Everything runs locally first. Cloud is for deployment, not dependency. This means full control, privacy, and the ability to work offline.
Cross-Platform: Kubuntu Linux on Z13, macOS on MacBook Pro. Building workflows that sync seamlessly across both via GitHub.
Acknowledgments
This setup and journey are inspired by conversations with Aaron Stransky at Magic Unicorn—a systems architect building genuinely advanced infrastructure. Watching someone operate at that level raises your own standards and ambitions.
Updated: October 2025