December 2025
From Neuroscience to Neural Networks
How studying brains led me to building AI systems. The path makes more sense than you'd think.
People ask how I went from cognitive science at Harvard to building AI at CompLabs. Path seems indirect but to me it's one continuous thread: understanding how intelligence works, whether in brains or machines.
Starting with the Brain
Came to Harvard fascinated by a simple question: how does the brain do what it does? Three pounds of tissue generating consciousness, learning languages, recognizing faces, writing symphonies. Wild.
My research focused on computational models of cognition. Trying to build mathematical descriptions of how the brain processes information. Worked on neural signal processing, studying how electrical activity patterns encode thoughts and intentions.
This led me to brain-computer interfaces at Blackrock Neurotech. Literally reading thoughts from neural activity. Watching a paralyzed patient move a robotic arm with their thoughts showed me both how far we've come and how much we don't understand.
The Deep Learning Connection
Here's what surprised me: the more I studied biological neural networks, the more I appreciated artificial ones.
Architectures are different: biological neurons are way more complex than artificial ones. Learning rules are different: backprop doesn't happen in the brain (probably). Substrates are different: silicon vs. carbon.
But the principles rhyme.
Both systems learn by adjusting connection strengths based on experience. Both develop hierarchical representations, simple features combining into complex concepts. Both exhibit emergent behaviors that weren't explicitly programmed.
Studying one illuminates the other. Neuroscience gives intuitions about what computations are possible and efficient. AI gives tools to test theories about how the brain might work.
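The shared principle above, learning as adjusting connection strengths from experience, can be sketched in a few lines. This is a toy illustration of my own (not code from any real system): a Hebbian rule driven by correlated activity next to a gradient rule driven by an error signal.

```python
# Toy sketch: two ways a connection strength can change with experience.

def hebbian_update(w, pre, post, lr=0.1):
    """Hebbian rule: neurons that fire together wire together.
    Change is proportional to correlated pre/post activity
    (and grows without bound unless something regulates it)."""
    return w + lr * pre * post

def gradient_update(w, x, y, lr=0.1):
    """Gradient rule for a one-weight linear model with squared error.
    Change is proportional to the error signal, not raw correlation."""
    pred = w * x
    error = pred - y           # dL/dpred for L = 0.5 * (pred - y)**2
    return w - lr * error * x  # dL/dw = error * x

w_hebb, w_grad = 0.0, 0.0
for _ in range(50):
    w_hebb = hebbian_update(w_hebb, pre=1.0, post=0.5)
    w_grad = gradient_update(w_grad, x=1.0, y=0.5)

print(round(w_grad, 3))  # converges toward the target weight 0.5
```

Both loops "learn" from repeated experience; the difference is what drives the update, which is one concrete place where the architectures diverge even though the principle rhymes.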
What Cognitive Science Teaches AI Builders
A few lessons I think every AI builder should know:
Representation Matters
Brain doesn't store raw sensory data. Transforms inputs into representations optimized for tasks. Visual cortex doesn't store pixels. Stores edges, textures, objects, scenes.
Same in AI. Choice of representation often more important than choice of algorithm. How you encode your problem determines what solutions are easy to find.
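A tiny illustration of the point, using made-up data: the same "find the object boundary" task is awkward on raw pixel values but trivial after one transform into an edge representation, which is roughly what early visual cortex computes.

```python
# Raw pixels: a bright object against a dark background.
pixels = [0, 0, 0, 9, 9, 9, 9, 0, 0]

# Edge representation: differences between neighbors.
# Boundaries become explicit values instead of implicit patterns.
edges = [pixels[i + 1] - pixels[i] for i in range(len(pixels) - 1)]

rising = edges.index(max(edges)) + 1   # where the object starts
falling = edges.index(min(edges)) + 1  # where it ends

print(edges)            # [0, 0, 9, 0, 0, 0, -9, 0]
print(rising, falling)  # 3 7
```

With the right representation, the "algorithm" is just `max` and `min`; with raw pixels, you'd need something far more elaborate to answer the same question.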
Attention is Computation
Brain can't process everything at once. Attention selects what to process deeply and what to ignore. Not a bug. A feature. What makes real-time cognition possible.
Transformers, with their attention mechanisms, showed how powerful this is. We're still learning how to implement attention efficiently.
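The selection idea can be made concrete with a minimal scaled dot-product attention sketch, the standard formulation, written out in plain Python rather than taken from any particular library: a query scores each key, a softmax turns scores into weights, and the output is a weighted sum of values.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for a single query (lists, no tensors)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]                   # similarity to each key
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]     # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]          # what to attend to, and how much
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]      # weighted sum of values

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([1.0, 0.0], keys, values)  # query matches the first key
print([round(x, 2) for x in out])          # first value dominates the output
```

Nothing gets processed "deeply" except what the weights select, which is the sense in which attention is computation rather than decoration.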
Learning Never Stops
Brain constantly learning, constantly updating models of the world. No clear separation between training and inference. Every experience is both test and lesson.
Most AI systems have sharp distinction between training and deployment. Bridging this gap is one of the field's biggest open challenges.
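One way to picture the alternative, under the toy assumption of a running-mean "model", is prequential evaluation: predict first, then learn from the same observation, so there is no train/deploy boundary at all. Purely illustrative, not a description of any production system.

```python
def prequential(stream):
    """Every experience is both test and lesson: score the current model
    on each new point, then immediately update the model with it."""
    n, mean, total_err = 0, 0.0, 0.0
    for x in stream:
        total_err += abs(x - mean)  # test: error before seeing the answer
        n += 1
        mean += (x - mean) / n      # lesson: incremental update of the mean
    return mean, total_err / n

model, avg_err = prequential([2.0, 2.0, 2.0, 8.0, 8.0, 8.0])
print(round(model, 2))  # 5.0 — the model kept adapting after "deployment"
```

A batch-trained system frozen after the first three points would be stuck predicting 2.0 forever; the online learner absorbs the shift, which is the gap the paragraph above describes.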
Embodiment Shapes Intelligence
Human cognition evolved in bodies moving through physical environments. Concepts grounded in sensorimotor experience. Understand "up" because we fight gravity. Understand "warm" because we have temperature sensors.
AI trained only on text is missing this grounding. It can manipulate symbols but may not truly understand what they mean. Part of why I'm excited about simulation: a way to give AI something like embodied experience.
Why I'm Building What I'm Building
CompLabs sits at the intersection of everything I've learned. Using neural networks to simulate physical systems in ways that augment human capabilities.
Not the path I planned. But looking back every step prepared me for this one.
Brain still the most sophisticated information processing system we know of. But we're getting better at building artificial systems that capture some of its magic.
Always happy to talk about neuroscience and AI intersections. If you're working in this space, let's connect.