Python · C++ (AnimNodes) · Character Rigging · Shaders · Substance Painter
Familiar
Houdini · Unity · ZBrush · After Effects · Blender
Production
Perforce · Shotgun · Jira · Git · Confluence
Specializations
Motion Capture Systems · Pipeline Automation · Real-Time Rendering · Team Leadership · AI Integration
About
Technical Artist with 4+ years of production experience.
Built the real-time mocap pipeline at SuperPlastic that shipped 256+ videos totaling
232M views, with 1M subscribers gained in a single month.
Led a team of 8 artists through the move from offline to real-time UE5 rendering.
Currently focused on AI-assisted content creation workflows.
Looking for Technical Artist or Creative Technologist roles.
Open to relocation.
The Problem
SuperPlastic needed to produce high-volume animated content for YouTube Shorts
but was bottlenecked by 4-hour RenderMan renders per shot. Traditional offline
rendering couldn't scale to social media's pace.
The Solution
Built an end-to-end real-time mocap pipeline in Unreal Engine 5. Managed the mocap lab
(Manus gloves, Xsens suit, Faceware), running weekly capture sessions with two actors.
Wrote a custom C++ AnimNode that handled full-body and face retargeting at runtime.
Reduced turnaround from 4 hours to 5 minutes per shot.
Technical Highlight: C++ AnimNode
Maya rigs had 20+ bones per limb — impossible to export cleanly to UE5. Built a
custom engine-level AnimNode that handled body retargeting, face retargeting,
finger jitter correction, and bone locking. Also built a morph target library
for facial expressions beyond Faceware's capture range, keeping characters on-model.
Same node supported both live mocap streaming and baked Maya animation playback.
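For illustration, a minimal self-contained sketch of the finger jitter-correction idea only (dead-band plus exponential smoothing); the class names, tuning values, and structure here are hypothetical stand-ins, not the actual engine-level AnimNode.

```cpp
// Sketch of finger jitter correction: small per-frame wiggles from glove data
// are ignored, larger motion is blended toward the new sample.
// Names and thresholds are hypothetical, not the production node.
#include <cmath>
#include <cstdio>
#include <initializer_list>

struct BoneRotation {
    float pitch = 0.f, yaw = 0.f, roll = 0.f;   // Euler degrees, for illustration
};

class JitterFilter {
public:
    JitterFilter(float deadBandDeg, float smoothing)
        : deadBand(deadBandDeg), alpha(smoothing) {}

    // Dead-band plus exponential smoothing on one channel.
    float Filter(float previous, float incoming) const {
        const float delta = incoming - previous;
        if (std::fabs(delta) < deadBand)
            return previous;                      // treat as sensor jitter
        return previous + alpha * delta;          // smooth real movement
    }

    BoneRotation Filter(const BoneRotation& prev, const BoneRotation& in) const {
        return { Filter(prev.pitch, in.pitch),
                 Filter(prev.yaw,   in.yaw),
                 Filter(prev.roll,  in.roll) };
    }

private:
    float deadBand;   // degrees below which motion is discarded
    float alpha;      // 0..1 blend factor toward the incoming sample
};

int main() {
    JitterFilter fingerFilter(0.5f, 0.35f);       // hypothetical tuning values
    BoneRotation indexTip{10.f, 0.f, 0.f};

    // Noisy samples around 10 degrees hold steady; a real finger bend gets through.
    for (float sample : {10.3f, 9.8f, 10.2f, 25.0f, 40.0f}) {
        indexTip = fingerFilter.Filter(indexTip, BoneRotation{sample, 0.f, 0.f});
        std::printf("pitch = %.2f\n", indexTip.pitch);
    }
}
```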
StoryboardTo3D
Hand-drawn storyboards enable rapid visualization in animation and game development, but translating 2D sketches into 3D scenes requires considerable manual effort. This thesis presents StoryboardTo3D, an Unreal Engine 5.6 plugin that automates storyboard-to-3D translation using vision-language models. The system implements a multi-angle capture framework (7 viewpoints) and iterative positioning refinement with AI feedback loops.
The Problem
Translating 2D storyboards into 3D scenes requires hours of manual camera placement, object positioning, and depth interpretation. Existing research prototypes convert text-based screenplays into preliminary scenes but do not efficiently translate visual storyboard sketches into 3D layouts using production asset libraries.
The Solution
An Unreal Engine 5.6 plugin with modular AI provider abstraction supporting three backends (Claude Sonnet 4.5 Extended Thinking, ChatGPT-4o, LLaVA-13B). The system's primary value is time-shifted automation: users initiate positioning processes that run unattended—including overnight—and return to completed 3D scenes.
System Pipeline
Seven-stage iterative refinement loop: analyze → generate → capture multi-angle → AI feedback → adjust → repeat until convergence.
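A minimal sketch of how such a loop can be driven through a single provider interface, assuming a hypothetical IStoryboardProvider abstraction and a score-threshold convergence rule; this is not the plugin's actual API.

```cpp
// Sketch of the refinement loop and provider abstraction. Interface names,
// scoring scale, and convergence rule are assumptions for illustration.
#include <cstdio>
#include <string>
#include <vector>

struct SceneState { int iteration = 0; };                  // placeholder scene data
struct CapturedViews { std::vector<std::string> images; }; // 7 rendered viewpoints

struct Feedback {
    double score = 0.0;                 // provider's self-reported confidence
    std::vector<std::string> fixes;     // suggested positioning adjustments
};

// One interface, multiple backends (Claude Sonnet 4.5, ChatGPT-4o, LLaVA-13B
// in the thesis); each backend only needs to implement Review().
class IStoryboardProvider {
public:
    virtual ~IStoryboardProvider() = default;
    virtual Feedback Review(const CapturedViews& views) = 0;
};

class StubProvider : public IStoryboardProvider {           // stand-in backend
public:
    Feedback Review(const CapturedViews&) override {
        ++calls;
        return { 60.0 + 10.0 * calls, { "nudge camera back" } };
    }
private:
    int calls = 0;
};

CapturedViews CaptureSevenViews(const SceneState&) { return { std::vector<std::string>(7) }; }
void ApplyAdjustments(SceneState& scene, const Feedback&) { ++scene.iteration; }

// analyze -> generate -> capture multi-angle -> AI feedback -> adjust, repeated
// until the provider's score clears a threshold or the iteration budget runs out.
void RefineScene(SceneState& scene, IStoryboardProvider& provider,
                 double targetScore, int maxIterations) {
    for (int i = 0; i < maxIterations; ++i) {
        CapturedViews views = CaptureSevenViews(scene);
        Feedback fb = provider.Review(views);
        std::printf("iteration %d: score %.1f\n", i, fb.score);
        if (fb.score >= targetScore) break;                  // convergence check
        ApplyAdjustments(scene, fb);
    }
}

int main() {
    SceneState scene;
    StubProvider provider;
    RefineScene(scene, provider, /*targetScore=*/85.0, /*maxIterations=*/10);
}
```

The calibration gap described below is one reason a score-only convergence check like this is unreliable on its own, which is why the hallucination finding matters for autonomous termination logic.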
Example Output
Input storyboard
Generated 3D scene
Plugin Interface
Multi-Angle Capture Architecture
Seven-viewpoint sequential capture system provides comprehensive spatial context for AI analysis. Six scout camera positions (front, back, left, right, top, 3/4 view) plus one hero camera expose positioning errors invisible from single viewpoints—depth misjudgments, occlusion issues, rotation errors.
Figure 5.4: Camera perspectives from a single iteration—front, right, back, left, top, 3/4, and hero views.
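As a rough illustration of the rig layout, a self-contained sketch that places six scout viewpoints plus a hero camera around a target; the radius, height, and offsets are illustrative assumptions, not the plugin's actual camera placement.

```cpp
// Sketch of a seven-viewpoint rig around a target point. Offsets and distances
// are made up for illustration; each camera would be aimed back at the target.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct Viewpoint {
    const char* name;
    Vec3 position;
};

std::vector<Viewpoint> BuildScoutRig(const Vec3& target, float radius, float height) {
    auto at = [&](float dx, float dy, float dz) {
        return Vec3{ target.x + dx, target.y + dy, target.z + dz };
    };
    return {
        { "front", at( radius, 0.f,            height) },
        { "back",  at(-radius, 0.f,            height) },
        { "left",  at(0.f,    -radius,         height) },
        { "right", at(0.f,     radius,         height) },
        { "top",   at(0.f,     0.f,            radius) },
        { "3/4",   at(0.7f * radius, 0.7f * radius, height) },
        { "hero",  at( radius, 0.3f * radius,  height) },   // shot-specific framing
    };
}

int main() {
    for (const Viewpoint& v : BuildScoutRig({0.f, 0.f, 0.f}, 500.f, 150.f))
        std::printf("%-6s (%.0f, %.0f, %.0f)\n",
                    v.name, v.position.x, v.position.y, v.position.z);
}
```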
Research Scope
36 Test Scenarios · 3 VLMs Compared · 12 Storyboard Panels · 7 Camera Viewpoints
Novel Finding: AI Score Hallucination
Expert validation revealed a 37.2% calibration gap between AI self-reported confidence (84.4% average) and actual positioning success (47.2% average). Vision-language models systematically assign high confidence scores to objectively inaccurate spatial compositions.
Figure 5.3: The 37.2% gap between what AI models report and actual positioning quality.
Model Performance Comparison
Expert validation across 12 panels × 3 models revealed dramatically different calibration patterns despite similar self-reported scores.
Model               Actual Success   Self-Reported   Calibration Error   Time/Panel   Cost/Panel
Claude Sonnet 4.5   83.3%            84.8%           +1.5%               69.1 min     $0.94
LLaVA-13B           41.7%            84.6%           +42.9%              7.8 min      $0.00
ChatGPT-4o          16.7%            83.8%           +67.1%              33.7 min     $0.25
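The calibration error column is simply self-reported confidence minus expert-validated success; a quick check of the numbers above:

```cpp
// Quick check of the calibration-error column: self-reported confidence minus
// expert-validated success, using the values from the table above.
#include <cstdio>

int main() {
    struct Row { const char* model; double actual, reported; };
    const Row rows[] = {
        { "Claude Sonnet 4.5", 83.3, 84.8 },
        { "LLaVA-13B",         41.7, 84.6 },
        { "ChatGPT-4o",        16.7, 83.8 },
    };

    double gapSum = 0.0;
    for (const Row& r : rows) {
        const double gap = r.reported - r.actual;   // calibration error
        gapSum += gap;
        std::printf("%-18s +%.1f%%\n", r.model, gap);
    }
    // Averaged across the three models this matches the headline 37.2% gap
    // (84.4% mean self-reported confidence vs 47.2% mean actual success).
    std::printf("mean gap: %.1f%%\n", gapSum / 3.0);
}
```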
Cost vs Quality Tradeoff
Three-dimensional tradeoff visualization for production decision-making: cost per scene, processing time, and expert-validated success rate.
Figure 6.1: Claude achieves highest quality (83.3%) at highest cost ($0.94/panel). LLaVA offers free local inference with moderate success (41.7%).
Panel-by-Panel Analysis
Granular success comparison across all 12 test panels × 3 models reveals per-scenario performance patterns.
All three models failed positioning on the same geometry, yet reported dramatically different confidence scores. ChatGPT-4o and LLaVA-13B scored 85/100 claiming "nearly perfect" positioning with "no visibility issues." Claude scored 70/100 and correctly identified "CRITICAL MISMATCH" with camera occlusion by bench.
Panel 9: Identical geometry, divergent AI assessments. Only Claude identified the critical occlusion failure.
Research Contributions
System Contribution
Minimum viable product demonstrating automated storyboard-to-3D conversion with modular AI provider abstraction.
Research Contribution
Discovery and documentation of AI Score Hallucination—systematic over-confidence in VLM spatial reasoning with implications for autonomous termination logic.
Architecture Contribution
Seven-viewpoint sequential capture system providing comprehensive spatial context for AI analysis of positioning errors.
[VIDEO: Full workflow, concept → mocap-ready character]
AI Character to Mocap Pipeline
Role: Developer
Type: Workflow R&D
Time: ~1 Hour End-to-End
The Problem
Traditional character pipelines (concept → model → rig → mocap setup) take
days to weeks. Small teams and indie developers can't afford this for
prototyping or rapid content creation.
The Solution
Developed a workflow using AI-assisted modeling (Rodin AI) combined with
automatic rigging to produce mocap-ready characters with dynamics (tail simulation)
in approximately one hour. Demonstrated full integration with the Xsens capture system.
Tools
Rodin AI · Unreal Engine 5 · Xsens MVN · Auto-Rigging
[VIDEO: Gameplay showing voice commands]
Voice-Controlled Robot Combat
Role: Developer
Context: Drexel University
Engine: Unity
The Challenge
Create a game with novel interaction that doesn't rely on traditional
controller input. Explore voice as a primary input method for real-time
gameplay.
The Solution
Built a robot combat game controlled entirely through voice commands.
The player issues verbal orders to their robot during battle (move, attack,
defend). Demonstrates interaction design thinking beyond traditional inputs.
Tools
Unity · Voice Recognition API · C#
Burlington Public Sculpture
Role: 3D Visualization Artist
Client: Studio Projects
Status: Approved & Built
The Project
Converted artist sketches into a photorealistic 3D sculpture visualization
placed in a realistic downtown Burlington environment. The visualization was
presented to city council and approved for construction, and the sculpture
has since been fabricated and installed.
Visualization
Day visualization
Night visualization
Installed
Tools
Unreal Engine 5MayaSubstance Painter
VictorVictor × Nike
Role: 3D Visualization Artist
Client: VictorVictor
Status: In Development
⚠️ Work shown under NDA. Please do not distribute.
The Project
A graphic designer working on a VictorVictor × Nike footwear collaboration
needed to evaluate designs before production, and 2D mockups weren't representing
material behavior accurately. Created photorealistic 3D renders showing
materials (leather, mesh, rubber, metallic elements) from multiple angles
for internal design review.