“Every new threat creates a new recipe or modifies an existing one.”
What do you mean? Give me an example. What about bot detection?
There is no way without embracing AI. And is there no alternative besides waiting until AI-driven software applications become available at some point?
This article is not an exhaustive list of AI recipes for IoT threat defense. It is about an innovative approach to building AI solutions on demand and in time without writing a single line of code, in order to solve a more fundamental problem of threat defense: time-to-value.
Interested? We are excited to share our viewpoints.
Current ways of adopting AI create more issues than solutions. Whatever choice one makes, the result is waiting:
Embedded AI: Waiting for updates
IoT platforms on the one hand and IT security platforms on the other have started to provide Embedded AI locked into platform applications. This approach responds to past problems and threats and is limited to a few specialized use cases.
Waiting for the next vendor update to mitigate current threats has proven too rigid to keep pace with continuously innovating cyber criminals.
Project AI: Waiting for solutions
Leveraging Project AI to build AI-driven responses on demand is a flexible strategy to avoid vendor fatigue. But it is far from a solution that puts defenders on par with cyber criminals.
AI platforms pay too much attention to data science work, require a high level of data literacy, and ignore that time-to-value is a mission-critical measure for threat defense.
Back to Embedded AI? No.
Time for Project AI 2.0 — Do AI like cooking.
In this article, we introduce “AI cooking”, an innovative approach to enable small teams to build AI solutions on demand at lightning speed. With the right pre-built ingredients and the right configurable recipes.
Project AI 2.0 is code-free, template-based and ships with fast operationalization & deployment of AI models and solutions. But wait. It is neither a holy grail nor a silver bullet for every data-centric problem.
We always introduce “AI cooking” as part of a holistic data strategy:
The right technology for the right phase of the standard data process to give data and algorithms the right context to make AI fast, reusable and successful.
Still interested? We are excited to continue. Back to the roots.
Why is IoT threat defense an issue?
The IoT is the new sensing paradigm for interacting with the physical world. It is one of the fastest growing areas of computing, and the number of IoT devices will surpass 25 billion in 2021, according to Gartner.
The majority of IoT devices are not visible to traditional IT security solutions.
IoT devices are dedicated to specific tasks, with a high priority on minimizing power, memory, and storage consumption. This makes it impossible to deploy agent software to gather events and provide a proper forensic trail to identify when IoT devices have been compromised.
According to Armis, this will affect more than 90 percent of enterprise devices by 2021.
Cyber actors actively search for and compromise these devices to use them as gateways into the enterprise network, and to reach physical objects for malicious control and operations.
More and more IoT devices exchange increasing amounts of data with minimal human intervention, and no security solution can listen to what they are talking about. A real problem, right?
Do upgraded IT security solutions make sense?
What is the minimal upgrade to support holistic threat management?
IT security solutions need to integrate enterprise-scale IoT device management to create a solid basis for IoT device monitoring and threat detection. Next is behavior analytics to handle “un-agentable” enterprise devices. This topic has gained recent attention in research, and all vectors point to AI for detecting suspicious behavior of IoT devices.
Suppose we focus on those (few) IT security solutions that already have AI-driven endpoint and network behavior analytics integrated. Then this must be extended to sensor readings and aggregated real-time IoT events, to detect indicators of malicious control and operations as well.
The rise of IoT threats will push vendors of IT security solutions to upgrade their platforms.
For those who have to set up valuable and secure IoT infrastructures, however, it is far from clear whether procuring these upgraded IT security solutions makes sense.
First, enterprise-scale IoT device management is an inherent part of every state-of-the-art IoT platform. And it is able to aggregate sensors and devices into digital twins of even more complex business objects.
As a result, IoT platforms and upgraded IT security platforms define two independent sources of truth for thousands of enterprise devices. But a single source of misconceptions, conflicts and extra effort.
Second, behavior analytics is also relevant for many IoT use cases (e.g. predictive maintenance, to detect indicators of upcoming production downtime), and vendors of IoT platforms have already started to upgrade their products as well.
Regardless of which platform is considered, putting AI into the hands of their vendors results in Embedded AI. And this kind of AI offering is too rigid to keep pace with continuously innovating cyber criminals.
We observe overlapping functional domains and AI support locked into platform applications.
From a business perspective, setting up IoT infrastructures with two conflicting and overly rigid building blocks is expensive and does not make any economic sense. Do you agree?
What approach makes sense?
Why not use purpose-built platforms for the purpose they were made for? Leave device management where it is and consider analytics as a task for AI platforms.
From a data-centric perspective, future IoT infrastructures with efficient threat defense integrated must be the result of a best-of-breed approach with three building blocks, organized along the phases of the standard data process:
- IoT platforms to manage sensors and IoT devices along the process phases track, collect, aggregate and actuate.
- IT security platforms to manage IT endpoints and networks along the phases track, collect, aggregate and actuate.
- AI platforms that leverage purpose-built platforms as data sources and destinations and support the process phases analyze and optimize.
This approach avoids the integration of conflicting purpose-built platforms. But there is another really important benefit:
IoT platforms and IT security platforms contextualize sensor readings and aggregated real-time IoT events, endpoint events, network traffic, and 3rd party threat intelligence data.
Project AI benefits from contextualized data from the very beginning. This is an important first step on the way to accelerate and facilitate the building of AI solutions on demand.
What is proposed so far? Think data-centric and deploy the right technology for the right phase of the standard data process.
What else? Advanced analytics is a cross-sectional task, and contextualized input data are very helpful to accelerate AI projects.
What is the right AI platform?
Data contextualization is important. However, there must be more steps on the way to significantly reduce time-to-value. We define time-to-value as the period of time that passes from identifying business problems or threats up to operationalization & deployment of AI models and solutions.
Today’s AI platforms respond to the widespread assumption that AI solutions always require demanding data science work and solve unique problems. This is the root cause of why enterprises complain about missing data science expertise, budget and more.
And, from an operational point of view, these platforms treat time-to-value as an afterthought. This is a problem, because time-to-value makes the difference between mitigating threats on the one hand, and compromise and damage on the other.
Instead of diving into endless discussions about which neural network wins which intellectual beauty contest, the entire process of how AI solutions are built must be moved into the spotlight.
We need a new type of AI platform: one that supports a fast & reliable process, is suited to move small DataOps teams at lightning speed, and is flexible enough to respond to evolving business cases and threats in time.
What are you doing? Are you trying to convince me that I should follow your proposed best-of-breed approach that favors AI platforms as important building blocks which are not prepared to support efficient threat defense?
We suggest shifting the focus from a functional to a data-centric perspective when it comes to transforming data into insights. And AI is an indispensable part of this process.
In addition, we point to the fundamental problem that current AI platforms are too cumbersome. But this is not the end of this article.
Just continue reading. Below we describe how to solve this fundamental problem. And if you find this approach convincing, you will certainly appreciate that we built PredictiveWorks.
Do AI like cooking
Based on the right AI platform, Project AI can be made to be like cooking: with the right pre-built ingredients and the right recipes, a fast track to operationalization and deployment, and the flexibility to create your own recipes.
There are not that many different algorithms and methods for data preparation. Starting from contextualized data facilitates data processing even more.
Connectors to data sources and destinations such as IoT platforms and IT security solutions can be pre-built and made configurable. Unified analytics engines such as Apache Spark lay the groundwork so that this also holds for the full spectrum of data operators:
From business rules and structured queries to machine and deep learning and to natural language and time series processing.
Based on big data workflow engines such as Google CDAP, DataOps teams then use these ingredients to configure and orchestrate AI workflows without writing a single line of code.
In this new type of AI environment, workflows are logical plans that define which ingredients should be used and how they should be arranged and organized.
They represent machine-readable instructions that teach a code generator how to transform a structured collection of data connectors and operators into an executable AI binary.
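The idea of a logical plan driven through a code generator can be sketched in a few lines. The registry entries, stage names, and `build` function below are hypothetical illustrations of the pattern, not the actual CDAP or PredictiveWorks API:

```python
# A minimal sketch of a declarative AI workflow: the plan is pure data,
# and a "code generator" turns it into one executable pipeline.
# All ingredient names here are hypothetical.

# Registry of pre-built, configurable ingredients (connectors & operators).
REGISTRY = {
    "mqtt_source": lambda cfg: (lambda _: cfg.get("sample", [])),          # data connector
    "scale":       lambda cfg: (lambda xs: [x * cfg["factor"] for x in xs]),
    "threshold":   lambda cfg: (lambda xs: [x > cfg["limit"] for x in xs]),
}

def build(plan):
    """Transform a logical plan (a list of stage configs) into an executable function."""
    stages = [REGISTRY[stage["use"]](stage.get("config", {})) for stage in plan]
    def run(data=None):
        for stage in stages:
            data = stage(data)
        return data
    return run

# The logical plan: machine-readable instructions, no code written by the team.
plan = [
    {"use": "mqtt_source", "config": {"sample": [1.0, 4.0, 9.0]}},
    {"use": "scale",       "config": {"factor": 2.0}},
    {"use": "threshold",   "config": {"limit": 10.0}},
]

pipeline = build(plan)
print(pipeline())  # [False, False, True]
```

The team only edits the plan; the same `build` step turns any valid arrangement of ingredients into a runnable pipeline.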
AI workflows and their associated binaries are made to create data products. And as with AI algorithms, there are not that many different products:
When we want to shed some light onto the data region of the “known unknowns”, classifications are a favored means to learn from knowns and detect unknowns with similar features.
When pushing forward into the region of the “unknown unknowns”, where previous knowledge does not exist, anomalies have proven to be a prominent data product for threat detection.
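As a minimal illustration of an anomaly data product for the “unknown unknowns”: without labeled history, we can only assume that most readings are normal and flag statistical outliers. The z-score method and the threshold below are our own choices for the sketch, not prescribed by any particular platform:

```python
import statistics

def anomaly_scores(readings, threshold=3.0):
    """Flag readings whose z-score exceeds a threshold.

    No previous knowledge of the threat is needed: the model of
    "normal" is derived from the data itself.
    """
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings) or 1.0  # avoid division by zero
    return [abs(x - mean) / stdev > threshold for x in readings]

# Simulated sensor readings with one obvious outlier.
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 85.0, 20.1]
print(anomaly_scores(readings, threshold=2.0))  # only the 85.0 reading is flagged
```

Real platforms use far more capable operators (isolation forests, autoencoders, and the like), but the shape of the data product, a per-event anomaly flag or score, stays the same.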
This is not an exhaustive list of all available data products. With a limited set of data products, there is a huge potential to reuse and customize AI workflows.
Reminder: Data are unique. But the way they are transformed into anomalies, classifications, forecasts or any other data product is not.
Real-time readings from IoT devices and network traffic events are definitely different data. When it comes to detecting anomalous readings or events, however, the respective data workflows are very similar. And in some business cases, it is just a matter of replacing the data connectors.
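This reuse can be sketched as follows. The two connectors are hypothetical stand-ins for an IoT platform and an IT security platform as data sources, and the median-based rule is just a placeholder for a real anomaly operator; the point is that only the connector changes:

```python
# One generic anomaly workflow, two different (hypothetical) data connectors.

def iot_connector():
    """Stand-in for real-time IoT sensor readings."""
    return [0.9, 1.0, 1.1, 9.5, 1.0]

def network_connector():
    """Stand-in for network traffic event sizes in bytes."""
    return [480, 512, 500, 8192, 505]

def detect_anomalies(connector, factor=3.0):
    """Generic workflow: flag values far above the median of the batch."""
    data = connector()
    median = sorted(data)[len(data) // 2]
    return [x > factor * median for x in data]

# Same workflow, different connectors - the analytic stages are untouched.
print(detect_anomalies(iot_connector))      # [False, False, False, True, False]
print(detect_anomalies(network_connector))  # [False, False, False, True, False]
```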
Reusability of AI workflows is often underestimated. In combination with pre-built ingredients and an efficient code generator, it is a solid basis for fast AI cooking.
No recipe for a 5-course menu starts with a description of how to prepare the side dishes of the main course. And building an AI solution does not start with the question of which workflows to orchestrate and which data connectors and operators to select and configure.
The beginning is marked by a business problem or question.
So, when we talk about AI recipes, we take a holistic view and define end-to-end solution templates, covering everything from the business problem and its drivers, to business tasks, and finally down to technical AI data workflows and ingredients.
AI contextualization does not end with the provisioning of contextual information to enrich data for AI operations. There is no logic in providing context to data while withholding context from the operations that turn data into insights.
Build your own recipes
Existing AI recipes can be used to detect known threats or customized to face evolving ones. But there are also situations to master where security researchers identify new data strategies for completely new threats and no appropriate template exists at all.
What do we get? What do these research results look like? Whatever threats these results address, they comprise data workflows and can be aggregated and mapped onto a new AI recipe:
Maybe with a new combination of AI ingredients compared to existing workflows. And maybe the resulting AI recipe contains a mix of data products that is not covered by available recipes yet. It is no magic. It just needs the ability to create your own AI recipes based on a wide variety of proven AI ingredients.
Much like a restaurant that offers 3-course and 4-course menus on a regular basis and then decides to add a 5-course menu to its carte.
Predict as you train
AI solutions come as at least 2-course menus, with a training course and a prediction course. Unlike in real restaurants, both courses are very similar and have many stages in common.
The prominent difference is that training courses create and update AI models, while prediction courses use them to generate insights. So why not leverage the same technology & platform to share stages, along with an AI model management that can be used seamlessly by both courses?
Whenever a certain AI model is retrained, the resulting version is immediately available in production. This is the fastest track from training to production: predict as you train.
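One way to read “predict as you train” is a shared model registry in which the training course publishes versions and the prediction course always resolves the latest one. The sketch below is a hypothetical illustration of that pattern, not PredictiveWorks’ actual model management:

```python
import threading

class ModelRegistry:
    """Shared model management: training publishes versions,
    prediction always resolves the most recent one."""
    def __init__(self):
        self._lock = threading.Lock()
        self._versions = []

    def publish(self, model):
        with self._lock:
            self._versions.append(model)
            return len(self._versions)  # version number

    def latest(self):
        with self._lock:
            return self._versions[-1]

registry = ModelRegistry()

# Training course: fit a trivial "model" (here just a mean-based threshold).
def train(samples):
    mean = sum(samples) / len(samples)
    registry.publish(lambda x: x > 2 * mean)  # the model is a callable

# Prediction course: always uses the latest published version.
def predict(x):
    return registry.latest()(x)

train([10, 12, 11])     # version 1: threshold 22
print(predict(30))      # True
train([100, 120, 110])  # retrained: version 2, threshold 220
print(predict(30))      # False - the new model is live immediately
```

Because both courses share one registry, a retrained model needs no separate deployment step to reach production.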
PredictiveWorks. is a new type of agile AI business platform, made to do AI like cooking. At its heart is a hub of AI recipes (templates), organized as a marketplace with a search & recommendation engine to find AI solution templates that fit a certain business problem or threat.
The template market is complemented by an AI Catalyst that transforms selected templates into executable AI solutions. The AI Catalyst is a code-free solution builder, made to move DataOps teams at lightning speed.
IoT Threat Defense
IoT devices have limited power, memory and storage capacity. Agent software cannot be rolled out to manage these devices with traditional IT security solutions. Fast-growing numbers of IoT devices mean fast-growing attack surfaces.
From a business perspective, it does not make any economic sense to just upgrade IT security solutions. It is important to take a holistic view:
Use IoT platforms and IT security platforms for the purpose they were made for and consider AI platforms as another inherent building block of valuable and secure IoT infrastructures.
Organize these three platforms along the phases of the standard data process. The aim is to deploy the right technology for the right phase to give data and algorithms the right context to make AI fast, reusable and successful.
Traditional Project AI is not prepared to enable enterprises to keep pace with continuously innovating cyber criminals. And without significantly reducing the time-to-value there is no efficient IoT threat management.
Backed by a wide variety of proven AI ingredients, template- or recipe-based Project AI 2.0 is like cooking à la carte: a reliable and fast process to build AI-driven responses to continuously evolving cyberattacks, on demand and in time. The implementation of the formula
“Every new threat creates a new recipe or modifies an existing one.”
introduces a new digital era for fast IoT threat defense.
Originally published at https://www.linkedin.com.