Large Scale Hybrid Edge-Cloud AI for Industrial Applications
Introduction
Hybrid edge-cloud AI makes flexible, large-scale industrial artificial intelligence applications possible. This approach combines the benefits of cloud computing and edge AI computing, enabling data access, cleansing, and transformation, along with real-time estimates, predictions, and process and product optimization, anywhere and at any time. With the IntelliDynamics(R) Intellect system, you can create and deploy models and optimizers in both on-premises and cloud environments. This architecture lets you scale your operations effortlessly, optimizing product performance while reducing costs, and promises businesses a more efficient and productive industrial AI experience.
In the Beginning Was the Data, and It Was Good
Neatly filed away in data historians, SQL databases, text files, Excel workbooks, and streaming inside Distributed Control Systems (DCSs), data tells a story of what has happened in production: process upsets, valve changes, what caused product performance (both good and bad), who added which materials, when, and how they were mixed, using what equipment, rates, and recipes, plus lab results, viscosities, process data, water cuts, tank levels, sensor data, and much more. The data isn’t particularly good in its raw form, but it tells the story.
Most industrial data is time series, meaning it is recorded through time as things happen: when material adds were made, when valves were changed, and when the lab recorded its results (or, ideally, when the sample was taken). Secondarily, data is often recorded by lot or batch, which are themselves recorded through time.
Data Cleaning
To make sense of this myriad of facts, one must extract and “synchronize” the data by batch and by time… aligning it so that materials, process, lab, and other data are properly related.
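As a rough sketch of what that alignment can look like in practice (pandas here, with made-up tag names, batch IDs, and a one-hour tolerance purely for illustration):

```python
import pandas as pd

# Illustrative process data: one reading per minute from the historian
process_df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01 00:00", periods=240, freq="min"),
    "batch_id": ["B-101"] * 120 + ["B-102"] * 120,
    "reactor_temp_C": 85.0,
    "feed_rate_kg_h": 420.0,
})

# Illustrative lab results: sparse, recorded when the sample was taken
lab_df = pd.DataFrame({
    "sample_time": pd.to_datetime(["2024-01-01 01:30", "2024-01-01 03:15"]),
    "batch_id": ["B-101", "B-102"],
    "viscosity_cP": [312.0, 298.0],
})

# Pair each lab result with the most recent process reading for the same batch,
# within a one-hour tolerance, so materials, process, and lab data line up.
aligned = pd.merge_asof(
    lab_df.sort_values("sample_time"),
    process_df.sort_values("timestamp"),
    left_on="sample_time",
    right_on="timestamp",
    by="batch_id",
    tolerance=pd.Timedelta("1h"),
    direction="backward",
)
print(aligned)
```

Each lab result ends up on the same row as the process conditions that produced it, which is the relationship the modeling steps below depend on.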
Then things get tricky. The data is often “messy”, but “messy” is a relative term. Generally, you want to remove the “outliers”: the bad sensor readings, data losses, entry errors, and so on. However, if you are looking for the causes of those bad readings and upsets, you’ll want to keep the “bad” data and perhaps drop some of the “normal” data that falls within acceptable operating ranges. Data cleansing, you see, depends on what you are after.
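A small, hedged example of that choice, assuming a pandas DataFrame with a single sensor column and purely illustrative limits:

```python
import pandas as pd

def split_by_operating_range(df: pd.DataFrame, column: str,
                             low: float, high: float):
    """Split rows into 'normal' and 'abnormal' based on an acceptable operating range.

    Which half you keep depends on what you are after: drop the abnormal rows
    when modeling normal behavior, or keep only them when hunting for the
    causes of upsets and bad readings.
    """
    in_range = df[column].between(low, high)
    return df[in_range], df[~in_range]

# Illustrative data with a bad sensor reading (-9999) and a data loss (None)
df = pd.DataFrame({"sensor_value": [98.2, 101.5, 99.8, -9999.0, None, 100.7]})

normal, abnormal = split_by_operating_range(df.dropna(), "sensor_value", 90.0, 110.0)
print(len(normal), "normal rows,", len(abnormal), "abnormal rows")
```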
Visualization
Once you have your data prepared the way you like it, a large variety of tools are available to view it in trend plots, scatter plots, bar charts, radar diagrams, and more, providing real-time insights.
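For instance, a basic trend plot of a process variable takes only a few lines with a standard plotting library (matplotlib here; the tag name and values are made up):

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Illustrative time series: a slowly drifting reactor temperature
timestamps = pd.date_range("2024-01-01", periods=288, freq="5min")
temperature = 85 + np.cumsum(np.random.normal(0, 0.05, size=len(timestamps)))

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(timestamps, temperature, label="reactor_temp_C")
ax.set_xlabel("Time")
ax.set_ylabel("Temperature (C)")
ax.set_title("Trend plot of a process variable")
ax.legend()
plt.tight_layout()
plt.show()
```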
Modeling
Here is where AI technology comes in: neural networks, machine learning, deep learning, clustering algorithms, and other model types. The benefit is that you can make estimates (at the current time) and predictions (into the future) using these intelligence technologies. You can see the key drivers of the results you have modeled through sensitivity analysis, which shows how input factors drive the outputs of your models. For example, it answers the question: which key process conditions drive your product performance metrics? You can also estimate the likelihood of a process or product fault for predictive maintenance.
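As one common, generic way to do sensitivity analysis (not a description of Intellect's internals), permutation importance ranks how strongly each input drives a fitted model's output. The input names and data here are invented for illustration:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPRegressor

# Illustrative data: product quality driven mostly by temperature and feed rate
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # columns: temperature, feed_rate, ambient_humidity
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# Permutation importance: how much does model accuracy drop when each input is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["temperature", "feed_rate", "ambient_humidity"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

The inputs whose shuffling hurts accuracy the most are the key drivers of the modeled result.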
Estimation and Prediction in Real Time
Once you have a good, validated model, you’ll want to put it online, making estimates and predictions in real time. This is where desktop tools fall short: writing and running models by hand isn’t practical as an ongoing operation. You need server software that takes everything you have done above and puts it online, getting current operating conditions from the DCS, material additions from the materials database, and quality test results from the lab automation system. This is where the IntelliDynamics(R) Intellect system comes in. It runs in the cloud, on-premises, or even both.
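Conceptually, the online loop looks something like the sketch below. The read_current_conditions, write_estimate, and estimate_quality helpers are hypothetical stand-ins for the plant-specific interfaces and the trained model, not the Intellect API:

```python
import time

def read_current_conditions() -> dict:
    """Hypothetical stand-in for reading live tags from the DCS / historian."""
    return {"reactor_temp_C": 85.3, "feed_rate_kg_h": 418.0}

def estimate_quality(conditions: dict) -> float:
    """Hypothetical stand-in for a trained model's predict() call."""
    return 0.02 * conditions["reactor_temp_C"] + 0.001 * conditions["feed_rate_kg_h"]

def write_estimate(tag: str, value: float) -> None:
    """Hypothetical stand-in for publishing a result back to the plant systems."""
    print(f"{tag} = {value:.2f}")

# The online loop: poll current conditions, run the model, publish the estimate.
for _ in range(3):      # a real service would run continuously
    conditions = read_current_conditions()
    write_estimate("estimated_viscosity_cP", estimate_quality(conditions))
    time.sleep(1)       # polling interval is application-specific
```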
Optimization in Real Time
Intellect’s model-based optimizers “invert” models, taking in current operating conditions and determining the setpoints that will drive the process to the desired results, such as maximum production, on-target product quality, and reduced use of expensive materials. These setpoints are time-critical and are written to a DCS to control the process. In these cases, Intellect is installed on a computer on the control network… this is most certainly on-premises, in essence “edge computing”.
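As a generic illustration of that “inversion” idea (not Intellect's actual optimizer), a fitted quality model can be searched numerically for setpoints that hit a target while respecting operating bounds and penalizing expensive material use. The model, target, and bounds below are invented:

```python
import numpy as np
from scipy.optimize import minimize

def quality_model(setpoints: np.ndarray) -> float:
    """Illustrative forward model: predicted quality from temperature and feed rate."""
    temp, feed = setpoints
    return 0.5 * temp + 0.1 * feed - 0.002 * temp**2

TARGET_QUALITY = 60.0
BOUNDS = [(70.0, 110.0), (300.0, 500.0)]  # allowable temperature and feed-rate ranges

def objective(setpoints: np.ndarray) -> float:
    # Hit the quality target while penalizing use of expensive feed material.
    miss = (quality_model(setpoints) - TARGET_QUALITY) ** 2
    material_cost = 0.001 * setpoints[1]
    return miss + material_cost

result = minimize(objective, x0=np.array([85.0, 400.0]), bounds=BOUNDS, method="L-BFGS-B")
print("Recommended setpoints:", result.x)  # these would be written to the DCS
```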
Hybrid Edge / Cloud Architecture
Since Intellect is architected as separate executables that communicate over the network, the components of a single Intellect system can be placed anywhere… on-premises, in the cloud, or both. In a recent customer implementation in the Middle East, the customer wanted the main Intellect system in the cloud (no problem; Azure worked well for this). The data interfaces needed to run on-premises to connect to their data historian, so a bit of “edge” worked just fine. We just needed to securely “see” the data access component’s IP address and port, and we were off and running.
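As a toy sketch of that split, the on-premises data access piece can be thought of as a small service that answers requests from the cloud over a secured link. Everything here, the port, the tag names, and the read_from_historian helper, is illustrative; a real deployment uses the vendor's data interfaces behind a VPN or secure tunnel:

```python
# edge_data_access.py -- runs on-premises, next to the data historian
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_from_historian() -> dict:
    """Hypothetical stand-in for a query against the plant data historian."""
    return {"well_01_flow_bbl_d": 1250.0, "separator_pressure_psi": 182.4}

class DataAccessHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the latest historian values as JSON to the cloud-side components.
        body = json.dumps(read_from_historian()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The cloud-side components only need to securely reach this address and port.
    HTTPServer(("0.0.0.0", 8042), DataAccessHandler).serve_forever()
```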
Large Scale Hybrid Edge / Cloud Operations
Controlling (optimizing) roughly 50 to 100 unit operations with one Intellect system is fine. But if you are interested in thousands of unit operations, such as multiple oil and gas fields, platforms, and central processing facilities all running simultaneously, a larger-scale solution is needed. No problem: just implement a handful of Intellect systems. Some can operate in the cloud as a service, if the business so desires and no control is being done; others can run in the on-premises data center at a central location; and some can run out on an oil and gas platform, particularly if you are controlling the processes as described in “Optimization in Real Time” above.
Edge solutions? No problem. Cloud computing? No problem. Need both cloud and edge? No problem. Blend, mix, and match as you please. A hybrid cloud-and-edge deployment is easily done, delivering lower costs, better insights, better performance, intelligent control, and real-time decisions.