Google Cloud is looking to accelerate batch computing, machine learning and other high-throughput workloads with what the company calls "preemptible" Nvidia GPUs that cloud customers can use in 24-hour periods.
In addition, Google (Nasdaq: GOOG) is cutting the prices of these GPU resources as part of the beta release, according to a January 4 blog post.
Google is offering these preemptible GPUs to its cloud customers for a maximum of 24 hours at a time, although Google warns that its Compute Engine can shut these processors down with only 30 seconds' notice.
This allows Google to offer the maximum amount of computing power to these types of workloads while keeping the price down. To start, the company is offering Nvidia Corp. (Nasdaq: NVDA) K80 GPUs for $0.22 per hour and P100 GPUs for $0.73 per hour.
"This is 50% cheaper than GPUs attached to on-demand instances, which we also recently lowered," Google Cloud product managers Chris Kleban and Michael Basilyan wrote in the blog post. "Preemptible GPUs will be a particularly good fit for large-scale machine learning and other computational batch workloads as customers can harness the power of GPUs to run distributed batch workloads at predictably affordable prices."
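For readers curious how such an instance is requested, a minimal sketch using Google's `gcloud` CLI might look like the following. The instance name, zone, and machine type are illustrative assumptions, not values from the announcement; the key flags are `--preemptible` and the GPU `--accelerator` attachment.

```shell
# Hypothetical example: launch a preemptible VM with one K80 GPU attached.
# Instance name, zone and machine type are placeholders chosen for illustration.
gcloud compute instances create gpu-batch-worker \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --accelerator type=nvidia-tesla-k80,count=1 \
  --preemptible \
  --maintenance-policy TERMINATE \
  --no-restart-on-failure
```

Because Compute Engine gives only a short warning before reclaiming a preemptible instance, batch jobs run this way generally need to checkpoint their progress so work can resume on a fresh instance after preemption.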
These preemptible GPUs follow Google's introduction of preemptible virtual machines (VMs) that offered powerful, but short-lived bursts of compute instances to help with machine learning, batch computing, research and other intensive workloads.
Last year, Google added local solid state drives (SSDs) to these preemptible VMs to allow high-performance storage for these workloads.
Over the past several years, Google has used its Cloud Platform to expand the company's investments in machine learning and other technologies under the artificial intelligence umbrella. Enterprise Cloud News editor Mitch Wagner recently wrote about Google's plan for video analytics and other machine learning ambitions. (See Google & Amazon Heat Up Machine Learning Rivalry.)
Google also developed what it calls Tensor Processing Unit (TPU) chips to accelerate machine learning within the cloud. (See Google's TPU Chips Beef Up Machine Learning.)
— Scott Ferguson, Editor, Enterprise Cloud News. Follow him on Twitter @sferguson_LR.