
Building the Infrastructure to Support a World Powered By AI

In Technology by Daniel Newman


Sure, we think about Siri and Alexa when we think about the ways AI has infiltrated our daily lives. These are the “everyday” use cases most of the population pictures when they hear “artificial intelligence.” On a much deeper level, however, businesses, cities, and governments are all planning for a future with billions of connected devices throughout the infrastructure. To build smarter cities or improve business personalization, the data alone will not be enough. What is required is a seamlessly connected ecosystem that accounts for all the data, its transmission, and the ability to use and store it as efficiently as possible.

Today at the “Economist Glimpses of an AI Future Forum” in Chicago, I had the chance to listen to Kirk Bresniker from HPE discuss their approach to matching viable AI use cases with the technology environment to support them. This presentation and the subsequent announcements from HPE provided valuable insights about the importance of building not just for AI, but for an ecosystem that supports the living workflows where data is created, collected, and then utilized. As more and more data is created (and this growth is exponential), the challenge will be not only maintaining satisfactory speeds but also doing so efficiently. Here are some of the key takeaways from the session, along with my point of view on what companies and municipalities need to be thinking about as they move forward with AI.

Predictive Isn’t Enough: A system that can simply predict device failure isn’t enough. We need more intelligent systems that can predict and then act on real-time data. With AI-enabled maintenance delivering performance increases of up to 20 percent on industrial assets, industrial companies should be flocking to solutions that can prescriptively maintain those assets. A minimal sketch of the idea follows below.
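
To make the predict-then-act distinction concrete, here is a minimal Python sketch of a prescriptive maintenance loop. The sensor feed, model, and work-order function are all hypothetical stand-ins for illustration, not HPE’s implementation.

```python
import random

FAILURE_THRESHOLD = 0.8  # hypothetical risk level above which we act


def read_vibration_sensor():
    """Stand-in for a real-time telemetry feed from an industrial asset."""
    return random.uniform(0.0, 1.0)


def predict_failure_probability(vibration):
    """Stand-in for a trained model; here, a trivial heuristic."""
    return vibration  # assume higher vibration implies higher failure risk


def schedule_maintenance(asset_id, probability):
    """The prescriptive step: act on the prediction instead of just reporting it."""
    print(f"Asset {asset_id}: {probability:.0%} failure risk; creating work order")


def maintenance_loop(asset_id, cycles=5):
    """Predict on each reading, then act automatically when risk is high."""
    for _ in range(cycles):
        reading = read_vibration_sensor()
        risk = predict_failure_probability(reading)
        if risk > FAILURE_THRESHOLD:
            schedule_maintenance(asset_id, risk)


maintenance_loop("pump-07")
```

The point of the sketch is the last branch: a purely predictive system stops at computing the risk, while a prescriptive one closes the loop by triggering the maintenance action itself.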

Get Industry Specific: While the last point focused on industrial assets, there are use cases for AI in every industry. It’s important to understand the specific use cases that could benefit yours. That starts with a firm understanding of what data you have available and how it can be used to create a tailored plan that accelerates AI adoption toward business-specific results.

Infrastructure Matters, A Lot: Deep learning and machine learning applications perform best when run on purpose-built equipment. HPE, for its part, has developed a purpose-built solution called the Apollo 6500 Gen10, hardware designed to handle the complex process of training a deep learning model. The initial specs on this unit show up to 3x faster model training than previous generations of the hardware, plus more than 10x faster data-sharing rates. While the hardware HPE announced is purpose-built, the big takeaway for me is that not all compute is equal when it comes to AI and deep learning applications; organizations need to keep that in mind (see the sketch below).
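
As a small illustration of why compute choice matters (this is generic, not tied to the Apollo 6500 specifically), a training script can at least detect and prefer an accelerator when one is present. The sketch below assumes PyTorch is installed; the toy model is purely illustrative.

```python
import torch

# Prefer a GPU when one is available; training deep learning models on
# general-purpose CPUs is often orders of magnitude slower.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 10).to(device)   # toy model for illustration
batch = torch.randn(64, 1024, device=device)   # synthetic input batch

output = model(batch)                          # forward pass on the chosen device
print(f"Ran a forward pass on: {device}")
```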

Know Your Environment: Companies should be actively benchmarking their environments in order to determine the right set of tools. Going back to the industry-specific use cases and the critical infrastructure they require, delivering AI-powered applications demands a much broader understanding of current assets, services, and environment. A simple starting point is sketched below.
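
Benchmarking doesn’t have to start complicated. A first pass can be as simple as timing a representative workload in each candidate environment and comparing the results. The dense matrix multiply below is my own illustrative proxy for compute throughput, not a standard benchmark; it assumes NumPy is installed.

```python
import time

import numpy as np


def benchmark_matmul(size=2048, runs=3):
    """Time a dense matrix multiply as a rough proxy for compute throughput."""
    a = np.random.rand(size, size)
    b = np.random.rand(size, size)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        np.dot(a, b)
        timings.append(time.perf_counter() - start)
    return min(timings)  # best-of-N reduces noise from other processes


print(f"Best of 3: {benchmark_matmul():.2f}s for a 2048x2048 matmul")
```

Running the same script on each environment under consideration gives a crude but honest baseline before investing in deeper, workload-specific benchmarks.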

My Point of View: What HPE has developed is a 360-degree approach that helps organizations understand the broader considerations of an AI-enabled future. Substantial growth in machine learning applications has propelled AI into the mainstream, but with experience, quality, speed, and efficiency all critical to successful AI adoption, companies are going to need a core-to-edge strategy to support AI’s proliferation. Perhaps what I found most interesting is that HPE is moving infrastructure capabilities into its Pointnext service portfolio, starting with the move from predictive maintenance to full turnkey management of industrial devices. The evolution I foresee is the services portfolio offering the required infrastructure on a consumption basis, so that more companies can benefit from data growth, particularly at the edge, where near-real-time actionable insights are delivered.

Daniel Newman is the principal analyst at Futurum Research. Follow him on Twitter @danielnewmanUV. Futurum Research, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many companies in the tech and digital industries. The firm does not hold equity positions in any of the companies cited.

This post was originally published at Futurum.

Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that companies need to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author whose most recent book is “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native, and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
