Next Generation Machine Learning Platform
Princeton University, Department of Neuroscience
Accelerating Science Research
The Neuroscience Institute at Princeton University is home to the Pillow Lab, a computational neuroscience and statistical machine learning group. The lab creates statistical models and methods for characterizing how neural populations encode, decode, and process information in different brain areas. Research includes topics such as approximate Bayesian inference, high-dimensional point processes, and unsupervised deep latent variable models.
Daniel Greenidge works at the lab as a full-time research assistant. He creates custom machine learning models that extract structure from neural data.
- Pillow Lab: Neural Coding and Computation Group
- Focus on neural systems and behavior
Researchers at the Pillow Lab worked on a single development server shared across the entire department, and deployed to one on-prem GPU cluster shared across the entire university. Submitted experiments entered a queue, and training on large datasets took a day or more to return results. The process was slow, and the team could not iterate quickly to fix bugs and improve model performance. The slow pace of development led researchers to search for a more powerful and flexible option for training models.
The models the Pillow Lab team was working on required extensive computing power, and the departmental resources were not sufficient. If the lab needed to train hundreds of models, the on-prem hardware required would have been prohibitively expensive, and the study would have taken far too long.
Working with the department’s on-prem GPUs held up projects, and the hardware wasn’t always reliable. The lab was looking for a transparent, well-designed machine learning platform that would let them spin up clusters quickly and deliver results faster.
Spell provided them with a cloud solution, saving time on both infrastructure and environment setup. Using Spell, they scaled up experimentation and accessed additional computing power whenever they needed it. Most importantly, everything flowed together. The transition to Spell was easy; they experienced no downtime or disruption to their existing workflow.
“The big advantage of Spell is that I can develop cheaply with a single instance and when I need to run a huge amount of compute I can scale up to that, run it, get the answers back at the same time it would take to train one model,” Daniel said.
Spell was built by engineers for engineers. The API design and core functionality of Spell were in line with the lab’s search criteria; flexibility and good design were major factors in their decision. Using Spell also meant access to Spell’s Support Team: engineers at Spell were readily available to answer any questions they had.
In academia, speed matters. Spell continues to help the lab get results faster and work quickly and iteratively. Researchers are accelerating model development and publishing results at a faster rate. For the Pillow Lab, the biggest value of Spell is this continuous acceleration of results, delivered by a solution that is extremely cost effective.
Elastic, flexible cloud compute is enabling the group to scale resources to the needs of a project. They are able to train hundreds of models simultaneously. “Really good science needs quite a bit of compute, and Spell helps with that,” Daniel said.
Spell makes it easy to run computationally intensive code. Powerful compute resources are available 24/7, resources the lab wouldn’t otherwise have. This includes ongoing engagement with Spell’s machine learning Support Team. On Spell support, Daniel said, “I’m very grateful and very impressed. I have never had this level of engagement.”
Machine Learning Projects with Spell
Request a Demo
Schedule an in-depth demonstration with a Spell representative to learn how Spell can help streamline and accelerate your machine learning development.