GPU-enabled EC2 instance for deep learning


As a team that builds data-powered software for ecommerce-focused companies, we are always looking to incorporate new development practices into our workflow.

One such advancement over the years has been the widespread adoption of GPUs for efficient deep learning.

Since we rely on Amazon Web Services (AWS) for most of our infrastructure needs, it was natural to start with the GPU-backed EC2 instances that AWS has been promoting in recent years.

Although there are many posts on how to set up a GPU instance for deep learning, the relevant details are scattered across so many of them that piecing everything together became a pain each time. So here is yet another post on how to set up an EC2 instance with GPU support for deep learning.

Instance specifics

This is a very opinionated setup guide, specific to AWS g2 instances (with NVIDIA cards), running Ubuntu 14.04 LTS.

The script installs CUDA Toolkit 7.5 and cuDNN 5.1, the latest stable releases at the time of writing this post.

TL;DR — run this

Note: the cuDNN download requires an NVIDIA developer account. Sign up for a free account and download the cuDNN v5.1 Library for Linux file here.
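The original install script is not reproduced on this page; below is a minimal sketch of the steps it covers, assuming a g2 instance on Ubuntu 14.04 with the CUDA 7.5 network repo and the cuDNN tarball downloaded as described above. Package and file names follow NVIDIA's 2016-era naming and should be verified against the current download pages before running.

```shell
#!/bin/bash
# Sketch only: provisions CUDA 7.5 + cuDNN 5.1 on Ubuntu 14.04 (g2 instance).
set -e

# Build tools and kernel headers needed by the NVIDIA driver
sudo apt-get update
sudo apt-get install -y build-essential linux-image-extra-$(uname -r)

# CUDA Toolkit 7.5 via NVIDIA's Ubuntu 14.04 repo (installs the driver too)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
sudo apt-get update
sudo apt-get install -y cuda-7-5

# cuDNN 5.1: assumes the tarball was downloaded manually from the
# NVIDIA developer site (login required) into the current directory
tar -xzf cudnn-7.5-linux-x64-v5.1.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/

# Make the toolkit visible to future shells
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
```

A reboot is typically needed after the driver install; `nvidia-smi` should then list the instance's GPU.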

Verify with TensorFlow

If you want to verify that your setup is working correctly, here is another short (also opinionated) script to test TensorFlow installed via Miniconda.
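The verification script itself is not shown on this page, but a sketch along these lines matches the description: install TensorFlow into a Miniconda environment, then run a one-liner that logs device placement. The installer filename and the GPU wheel are 2016-era assumptions; consult the TensorFlow install docs for the build matching CUDA 7.5 / cuDNN 5.1.

```shell
#!/bin/bash
# Sketch only: TensorFlow GPU check via Miniconda.
set -e

# Install Miniconda (Python 2 era) into the home directory
wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh
bash Miniconda2-latest-Linux-x86_64.sh -b -p "$HOME/miniconda"
export PATH="$HOME/miniconda/bin:$PATH"

# Isolated environment for TensorFlow
conda create -y -n tensorflow python=2.7
source activate tensorflow

# Assumption: the GPU-enabled package name/wheel varied by release at the
# time; substitute the wheel URL for your CUDA/cuDNN versions if needed.
pip install tensorflow-gpu

# With log_device_placement=True, ops placed on the GPU are logged as
# "gpu:0"; seeing those lines confirms the CUDA setup is working.
python -c "import tensorflow as tf; \
  sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)); \
  print(sess.run(tf.constant(1.0)))"
```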

Update: To use your newly configured GPU instance on EC2, refer to our post about maintaining persistence across spot instances to optimize for both cost and continuity.

Written by Ramanan Balakrishnan and the Semantics3 Team in Bengaluru, Singapore, and San Francisco

Published at: September 24, 2016
