IBM is looking to offer big data number crunchers and data scientists a way to add more machine learning and deep learning technology into the analytics process by combining two of the company's platforms developed specifically for artificial intelligence and scientific research.
On Tuesday, two IBM executives -- Dinesh Nirmal, vice president of analytics development and site executive, and Sumit Gupta, vice president of HPC, AI and machine learning -- published a blog post detailing the offering, which combines the company's Data Science Experience and PowerAI platforms.
PowerAI is Big Blue's distribution platform for machine learning and artificial intelligence that runs on the company's Power server systems. (See IBM Software Helps Speed Up Deep Learning.)
The Data Science Experience is an integrated developer environment that IBM first introduced for the public cloud in 2016. Earlier this year, the company rolled out a version for private cloud distribution. (See IBM Brings Data Science Experience to Private Cloud.)
Taken together, the two integrated platforms aim to bring additional layers of machine learning and deep learning into the big data analytics process. The combination also gives data scientists greater ability to train AI models and neural networks to automate some of the tasks associated with such expansive number-crunching, while producing faster and more accurate results.
"Thanks to more powerful systems and graphical processing units (GPUs), we are able to train complex AI models that enable these insights," Nirmal and Gupta write in the October 10 post.
IBM has been touting the use of GPUs in the machine learning and AI process for some months now. Because GPUs pack hundreds or thousands of parallel cores, the chips allow deep learning networks to process data faster and accelerate the training of neural networks.
In the blog post, the IBM executives noted that the combined platform will take advantage of Nvidia's NVLink, a high-speed interconnect between CPUs and GPUs that the company promises delivers speeds 2.5 times faster than PCI-Express 3.0.
The integrated platform will also let data scientists and researchers take advantage of deep learning frameworks such as TensorFlow, a machine learning framework developed by Google. IBM has also signaled support for other frameworks, such as the open source Caffe.
By combining these technologies, IBM is looking to give data scientists, as well as the enterprises that employ them, greater insight into what customers may want.
Nirmal and Gupta use the example of a bank taking advantage of the technology to determine if a customer may default on a loan, or might be willing to invest money in different accounts. Another example is predicting equipment failures in the manufacturing process.
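To make the loan-default example concrete, here is a minimal sketch of the kind of model such a bank might train. The features, data and logistic-regression approach are all hypothetical stand-ins; a production system would use a framework like TensorFlow or Caffe on GPU-backed hardware, as described above.

```python
# Hypothetical loan-default predictor: logistic regression trained
# by gradient descent on synthetic data. Feature names are invented
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features: [debt-to-income ratio, normalized missed payments]
X = rng.random((200, 2))
# Invented labeling rule: high combined risk factors => default (1.0)
y = ((X[:, 0] + X[:, 1]) > 1.2).astype(float)

w = np.zeros(2)  # model weights
b = 0.0          # bias term
lr = 0.5         # learning rate

for _ in range(500):  # gradient-descent training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss w.r.t. w
    grad_b = np.mean(p - y)                  # gradient w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

def predict_default(features):
    """Return the model's estimated probability of default."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

risky = predict_default(np.array([0.9, 0.9]))  # high-risk applicant
safe = predict_default(np.array([0.1, 0.1]))   # low-risk applicant
```

As the executives note, models like this are retrained as new repayment data arrives, which is where the GPU-accelerated platform is meant to help.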
"These learning models continuously evolve and get smarter over time, and with it, become more sophisticated at identifying failures," the two write in the blog post.
— Scott Ferguson, Editor, Enterprise Cloud News. Follow him on Twitter @sferguson_LR.