Explaining o9's highly differentiated Enterprise Knowledge Graph (EKG) - o9 Solutions (2024)

This is Hamesh from o9 Solutions. The way I wanted to structure this session is to start with a quick overview of the platform. I know this is something you may have seen already, so it will just be a quick primer, and then we'll cover how the EKG fits into the platform context. Then we can talk a bit about reference applications, because that is relevant to these discussions. The data models that are available would be slightly different from a data model for a CPG company, because the industry flavor of those data models is important.

And we'll talk a bit about how those are set up in the platform. And then finally, we'll get into the details of the EKG, where I'll try to intersperse details of the platform itself. So with that, let me start from the first section: o9 is a knowledge-powered analytics, planning, and learning platform. There is a collection of elements that come together to create the platform itself.

And those are really the layers that we'll start peeling as we go through the next slide. This is a busy slide, but we can start peeling the layers here. I'd like to start from the bottom, which is really the first section that we briefly touched upon, around data.

So data is going to be central to the theme here, where you have a number of data elements coming into your planning systems.

These are the traditional data sources which are more structured from your data warehouses and ERPs and CRM systems. But now more and more, we're seeing a lot of these real time data feeds coming in from your IoT sources or your edge APIs. So these could be from your smart containers or your equipment that is keeping track of the predictive maintenance cycles and some of the automation that you're building into your equipment. So those are the kinds of data streams that come in.

Obviously, there are other data sources from your planning, like spreadsheets or PowerPoints that you may be accessing today. Or image processing: if you're capturing images of certain elements in your system, then those could be embedded into the platform. So think of any data source, both structured and unstructured, coming in.

And the key requirement for that is really that there needs to be a storage mechanism to store it and process it. More and more, we're seeing external data streams becoming relevant. This could be the market knowledge stream, where you're getting data about your external market, for example, competitors.

If a competitor has a launch coming up, or a competitor has an event or a promotion with a specific customer, those are elements that you'd like to understand. But they could also be macroeconomic indicators; for example, GDP growth might be relevant.

Or weather events at ports that might be impacting your ability to fulfill, or restrictions that might be imposed due to port closures. They are relevant because they are going to be used in forecasting. The way we improve your ability to forecast is really by including these market leading indicators, which help our AI/ML algorithms provide a better forecast rather than just looking at your historical data, which is your shipments and sell-in data. You would be looking at leading indicators in terms of some of these drivers in the future.
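To make the idea concrete, here is a minimal sketch of driver-based forecasting in Python. The feature names, the numbers, and the plain linear regression are illustrative assumptions for this walkthrough, not the platform's actual algorithms, which the speaker only describes at a high level.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly history: shipments plus two external leading indicators.
X = np.array([
    # gdp_growth_pct, competitor_promo (0 = no, 1 = yes)
    [2.1, 0], [2.3, 0], [1.8, 1], [2.0, 1], [2.5, 0], [1.6, 1],
])
y = np.array([120, 128, 101, 108, 135, 95])  # historical shipment volumes

model = LinearRegression().fit(X, y)

# Forecast a future month given expected GDP growth and a known competitor promotion.
print(model.predict(np.array([[2.2, 1]])))
```

The point is simply that external drivers become extra columns alongside history, so the model can react to them instead of extrapolating shipments alone.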

So this gives us a framework to actually start looking at how we improve the forecasting models. Similarly, on the supply chain knowledge side, there could be a lot of data coming in about impacts or issues that suppliers are having in their supply locations. Those are things that can be tracked, but the way that is being leveraged now is: how do we connect all of those dots? Which is where the enterprise knowledge graph comes into play, because it's really taking all the data that you're providing and trying to model the enterprise in a way that it can process the information and convert that information into knowledge.

So, really, think of it like the human brain, where we observe and convert those observations into memory. Similarly, the EKG is really trying to do the same thing. An example of this would be: I may not remember which restaurants I ate at over the last couple of weeks, but I might remember the ones that tasted the best or tasted the worst. Right?

So the brain is really abstracting those observations and putting them into the knowledge that we have. Similarly here, an example could be customers that have historically been ordering: I would like to understand which of them are reliable and which are not. Customer orders come through as transactions, but we convert them into knowledge, where I could say that customer A's ordering is more reliable compared to customer B's.

So I know that the chances that customer B's orders might get canceled or pushed out are high. When it comes to decision making, if I need to decide whether to expedite an order at an additional cost, the system would recommend, for customer B, given that the chances of cancellation are high, that expediting may not be the best option, so you might be better off waiting rather than expediting that order. That would be a small example of how we take data, convert it into knowledge, and help with decision support as part of the process. All of those come together through the analytics and planning framework.
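As a small illustration of that data-to-knowledge step, here is a sketch in Python with made-up order statuses and a made-up reliability threshold; the actual knowledge models in the platform are richer, this just shows the shape of the idea.

```python
from collections import defaultdict

# Hypothetical order history: (customer, final status) pairs from transactions.
orders = [
    ("customer_a", "fulfilled"), ("customer_a", "fulfilled"), ("customer_a", "cancelled"),
    ("customer_b", "cancelled"), ("customer_b", "pushed_out"), ("customer_b", "fulfilled"),
]

stats = defaultdict(lambda: {"total": 0, "bad": 0})
for customer, status in orders:
    stats[customer]["total"] += 1
    if status in ("cancelled", "pushed_out"):
        stats[customer]["bad"] += 1

# Reliability = share of orders that were neither cancelled nor pushed out.
reliability = {c: 1 - s["bad"] / s["total"] for c, s in stats.items()}

def should_expedite(customer, threshold=0.7):
    """Only recommend paying to expedite when the customer is reliable enough."""
    return reliability.get(customer, 0.0) >= threshold

print(reliability)                     # roughly {'customer_a': 0.67, 'customer_b': 0.33}
print(should_expedite("customer_b"))   # False: high cancellation risk, better to wait
```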

So really, what the EKG provides is the framework for us to store and persist these models. But how do we derive insights from that for planning? That is really the next layer, where the EKG is powering a lot of the analytics that the system can do. For example, on the supply chain side, your cost-to-serve analytics, or how do I optimize the inventory policies that I use, what should be the optimal sourcing locations or lead times; that would be part of supply chain analytics.

And inventory, obviously, or multi-echelon: what safety stock levels should I carry across the echelons in my supply chain, and so on. Similarly, you have your demand analytics and your commercial or revenue analytics around your product portfolios, your assortment optimizations, and so on. All of these are, again, feeding us insights. So it's taking the knowledge from your enterprise knowledge graph, converting it into insights, and then those are being used by our planning processes.

Now here we have a breadth of planning processes, going all the way from your integrated business planning (your IBP or S&OP processes, etcetera), to revenue management, where a lot of the commercial planning, forecasting, demand planning, budgeting, product life cycle management, and opportunity pipeline management come into play. And then on the supply side, you have your master planning, your configure-to-order or engineer-to-order workflows, your distribution planning, etcetera. The power of the platform is really around taking the information and plans that you're creating and being able to easily collaborate and share those with partners.

So how do you increase your engagement with customers? For example, you have your B2B customers or your dealers and distributors whom you'd like to be in sync with in terms of the ordering cycles, the joint business plans, the initiatives to increase sales, your revenue plans, your marketing initiatives. All of them can be pulled together as part of the platform itself now. Similarly, you have engagement with your suppliers, your raw material collaborations, your equipment collaborations, your spare parts, where you'll be collaborating with tier one and tier two suppliers and having them on the same platform, so they are seeing the information in real time without you having to print out papers or send them emails.

They can actually log in to the same platform with their own credentials and do real-time collaboration. The other element to this is that it's not just about planning. You also want to connect to operations.

And in today's connected world, the operations side, the folks that are executing, covers a bunch of operational areas, from procurement operations to marketing and account sales, etcetera.

Those are folks that need to get information, data, or insights from the platform so that they can make decisions and help drive some of the demand shaping activities with their customers, or some of the factory activities, where they can provide information if there are specific resources that are going to be down or specific capacity issues. Those elements can come back from the operations team into the platform in real time, and then they can be considered as part of the planning. So that was a quick overview of the platform, its various elements, and how these tie together as part of the platform, along with some of the high-value capabilities.

What I did talk a bit about is primarily the real-time market knowledge, how that comes into the system, and then a bit about the demand planning itself, where we're moving more towards AI/ML-powered forecasting and analytics.

And then obviously synchronizing the plans, right, being able to do what-ifs, where I can take supply chain scenarios or demand scenarios. For example, if I get a demand upside for a new order from a customer, does supply have the ability to meet that demand? Am I able to support that? Being able to respond quickly, in real time. We'll talk a bit about some of those capabilities as part of the platform.

And then, obviously, digital collaboration, right, where we're talking about customer engagement and supplier engagements.

And finally, it's around learning. Right? So of the five key things that I just talked about, the platform is taking all the information but also learning from past actions, especially from data. There's postgame analysis where you can look at plan versus actuals and why: what are the drivers behind some of the root causes, and how do the models refine themselves so that they're taking the latest and greatest information and converting it into knowledge.

So those are the four or five key things that the platform is trying to enable. In the next sections, we'll try to get into some more details. The area of focus is this section, so let me just highlight that for you here.

So what we're gonna talk a bit more about is the EKG itself.

What is the modeling framework behind it, and how is it the backbone of everything that we do in the platform? So let me just move screens. I'm just zooming into the EKG itself, and here again, think of it as the brain, where it is actually an evolving set of entities. Typically in traditional systems, these are somewhat fixed; they are restricted in the sense that you have a fixed schema.

That's very hard to change, in technical terms. Whereas the way we are thinking about it is that it is evolving, especially with the business changing, where you need to add new nodes to your system. It's taking all of the information and trying to extend, and to make it easy for you to add new nodes into the supply chain. So what does this really mean? Maybe let me just do a quick summary here.

What we're talking about is how you model your enterprise itself. Right? The digital twin of your enterprise, so that it allows us to convert the data coming in into knowledge. Here, you'll see that there are different types of relationships, the hierarchy relationships and the network relationships, and the reason both are required is that there are different needs.

There are certain needs for demand planning and market intelligence where you're looking at data, from a different lens compared to how you're looking at your supply chain, right, which requires more of a graph relationship.

And those are the key things that come together when we talk about the graph cube. It's really around taking relational databases and the OLAP elements from there, where you can look up and down the hierarchies, but also adding the graph context to that, which gives you more of the ability to model supply chains. This can be force fitted into other data models as well, but it is not very efficient.

And here what we're trying to highlight is the fact that we've come up with a more efficient way for you to model an enterprise, both hierarchical data as well as network data. The section above is really around taking the data coming in and creating these key supply chain knowledge models. So there are supply chain knowledge models around your various plans, around supply intelligence, your procurement plans, etcetera.

And then the demand knowledge models, which focus a bit more on the demand side, around pricing, your demand forecasting, your initiative plans, etcetera.

And then the market knowledge models, which take a lot of the external data feeds in. It could be anything from your channel inventory, point of sale, market size, market share, etcetera. All of those models come together so that we can run insights and start creating better decisions for the enterprise.

What are some of the key tenets of the digital brain and the EKG? There are four or five here that we've listed. One is the notion that it needs a tightly coupled big data store. The reason for that is there's a lot of data coming in, and there has to be seamless integration for me to store transaction-level data, especially from your sensors, for example your smart hubs or smart containers that are sending data, your machine equipment that's sending data, or your plants, the resources that are sending IoT-level data. Those still need to get captured, and those transactions really need to be in a big data store, which is where the platform natively integrates with big data stores for us to store that level of information.

And then the second piece we touched upon briefly, which is the need to have OLAP structures, which are really for navigation. Right? I'm looking at aggregate-level plans or I'm trying to look at detail-level plans.

How do I sync up aggregate and detail plans, etcetera? Those are more naturally hierarchical, which is the OLAP data structures.

But we also need supply chain data structures, especially for modeling an extended end-to-end supply chain network, not just within your enterprise, but extending it to your customers and your customers' customers, and on the supplier side going all the way from your supplier locations to your suppliers' suppliers and their locations. Right? So that needs a combination of the two. The third piece is extensibility and flexibility, and this is where the power of some of the graph modeling comes into place. Say I have information about a new competitor, a competitor X that I have never had in my system before and never knew about. An email comes in saying competitor X has a product launch next week, which is related to my product Y category.

Then just from that information coming in, the system is able to connect the dots. For a specific product category or product, I have a product node here, and I now have an additional entity, which is competitor X. Based on those, that link gets created.

Now it can actually notify the product manager, who is the product owner, and they get this information coming in through the enterprise knowledge graph. It flows to the relevant person who might be interested in this product and in what impact it may have on the forecast and the plans for this product. If any more information now starts coming in about competitor X, you know that it's already part of the system, through smart tagging.

We already know that competitor X is related to product Y, and those two get aligned together anytime any other information flows in about it. So there's the ability to continuously add data in, new products, new items, new competitors, any of these elements seamlessly getting processed and added in. So you're evolving your brain, but you also have the ability to add new nodes.

Say I did not have a distributor node. Let's say my model was mostly dealers and B2B customers.

Now I'm adding a new set of distributors that I'd like to sell through. Adding a new distributor node should not mean downtime for your application, where you need to recreate certain hierarchies, reload data, etcetera. It should be as seamless as plug and play. I can just add a new node connected to the graph, load the data in, and you're ready to go. That's what we mean by evolving: you'd like the brain to evolve, to self-evolve based on data coming in, but also to be easily extensible, adding new nodes and new intelligence around those nodes.

The fourth one is around APIs, and this is really critical for real time information access. As you know, most of the data is now moving digital where the data is accessible through APIs.

So any of the IoT data or the real-time sensors that are sending you data, those are accessible through APIs. The platform provides a native API framework where any model that's available in the o9 platform actually has its own set of APIs exposed. You can then use the REST API framework to connect and get data from any of your sources in real time.
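As an illustration only: the endpoint paths, payload fields, and authentication below are hypothetical placeholders, not the platform's documented API, but they show the shape of pushing and pulling data over a REST framework in real time.

```python
import requests

# Hypothetical tenant URL, endpoint, and token; the real model APIs and auth
# scheme are defined per tenant inside the platform.
BASE_URL = "https://example-tenant.example.com/api"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# Push a real-time IoT reading (say, a smart-container temperature) into a model.
reading = {"container_id": "C-1042", "timestamp": "2024-05-01T08:30:00Z", "temperature_c": 4.2}
resp = requests.post(f"{BASE_URL}/iot/container-readings", json=reading,
                     headers=HEADERS, timeout=10)
resp.raise_for_status()

# Pull the latest readings back out for analysis or planning.
latest = requests.get(f"{BASE_URL}/iot/container-readings", params={"limit": 100},
                      headers=HEADERS, timeout=10).json()
print(len(latest))
```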

The fifth one is around knowledge models, not just a data model. This is something we've talked about a bit, where it's not just about collecting the data, but how do you convert that data into knowledge? An example of that would be forecasts and orders, and converting that data as it comes in.

So you have collected a lot of historical data. What am I doing with that? How do I know what my forecast accuracy or bias is for each of the locations that I have in my supply chain, and which of them have higher accuracy? Those are elements of knowledge that we keep adding; as more and more data comes in, it keeps feeding the knowledge models.

So it keeps building that repository of knowledge and stores that knowledge for every node in the supply chain. The critical piece here is the data coming in and feeding these knowledge models, which are then used for better decision making when we get into the planning phase.
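A tiny sketch of that accuracy-and-bias knowledge, using pandas and invented numbers; the metric definitions here (a MAPE-style accuracy and a simple signed bias) are assumptions for illustration, not the platform's exact formulas.

```python
import pandas as pd

# Hypothetical forecast vs. actual history per supply chain location.
history = pd.DataFrame({
    "location": ["DC_East", "DC_East", "DC_West", "DC_West"],
    "forecast": [100, 120, 80, 90],
    "actual":   [90, 130, 60, 95],
})
history["error"] = history["forecast"] - history["actual"]
history["abs_error"] = history["error"].abs()

grouped = history.groupby("location")
knowledge = pd.DataFrame({
    # Accuracy: 1 - sum(|error|) / sum(actual), per location.
    "accuracy": 1 - grouped["abs_error"].sum() / grouped["actual"].sum(),
    # Bias: positive means the location is systematically over-forecast.
    "bias": grouped["error"].sum() / grouped["actual"].sum(),
})
print(knowledge)
```

Each new week of actuals just appends rows and refreshes the table, which is the "data keeps feeding the knowledge model" behavior described above.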

Now, some of the underlying capabilities driving these modeling frameworks and these tenets are really around the ability for us to add ML analytics. We provide the platform for you to actually run big data analytics on the big data store, and we have embedded a number of open source machine learning algorithms.

That allows us to run these analytics on the data sources that we're collecting. The other one is around DS match, which is your demand supply match. We have a proprietary best-in-class algorithm, which is really refined and fine-tuned for the data model that we have, so it's able to perform at scale. And examples of scale mean Walmart scale, where they have millions of SKUs and thousands of store locations.

That's retail I'm talking about, but just to give you a flavor: at Starbucks and Walmart scale, we're running their entire supply chain network in the same platform. This is where the demand supply match comes in, where we have not just simple unconstrained solves, but a lot of functionality for constrained solves with capacity constraints and storage and handling constraints.

Then the third piece is aggregation and disaggregation, and this again is part of the OLAP framework.

You don't want to be storing plans at the lowest level, where every time you have to process a lot of data at the leaf level. In the framework that we have, every model can be at a different grain, but they are synchronized through the ability to aggregate and disaggregate. Right? So I'm able to spread data down to the leaf levels, or to the lower level where the plan is modeled, but I'm also able to easily aggregate without having to have an additional batch process for aggregation or disaggregation.

So that's really the vertical side, going from top to bottom, but there's also bidirectional propagation, which we talked a bit about: any data coming into the graph is able to propagate both ways, from the customer side all the way to your suppliers' suppliers, and any changes on the supplier side can come back to the customer side. Because the nodes are connected, consider a disruption, for example, at a supplier's factory location, a disruption due to a tsunami or, let's say, an earthquake.

Then how does the system know which orders down the line, maybe two months down the road, are going to be impacted? That's really the power of the bidirectional propagation: once that information comes into the EKG, you're able to predict that maybe these X customer orders are going to be impacted, because that's where the production happens for those items. Similarly on the demand side, if there's a new competitor event or a new customer order coming in, does the supply chain have the ability to support that order? Being able to propagate that, understand which nodes in the supply chain will have capacity constraints, and peg that order through the supply chain, those are elements that are enabled by the bidirectional propagation.
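A stripped-down sketch of the downstream half of that propagation, with an invented five-node network; the real EKG also propagates quantities, dates, and constraints, so this only shows the connectivity part.

```python
from collections import deque

# Hypothetical supply chain graph: each node lists the downstream nodes it feeds.
downstream = {
    "supplier_plant_jp": ["factory_1"],
    "factory_1": ["dc_europe"],
    "dc_europe": ["order_192", "order_205"],
    "dc_americas": ["order_310"],
}

def impacted_orders(disrupted_node):
    """Walk downstream from a disruption and collect the orders it can reach."""
    impacted, queue, seen = [], deque([disrupted_node]), {disrupted_node}
    while queue:
        for nxt in downstream.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                if nxt.startswith("order_"):
                    impacted.append(nxt)
    return impacted

# A disruption at the supplier's plant surfaces the orders it ultimately serves.
print(impacted_orders("supplier_plant_jp"))  # ['order_192', 'order_205']
```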

The fourth one is around scenario planning. As you know, there are a lot of what-ifs that can be done interactively with the platform, given the speed of processing and the ability for us to model aggregate-level networks and so on. The solve happens in real time.

Right? You're solving things in a matter of seconds, and because of that you're actually able to create a number of what-if scenarios. So I can say, what if demand for customer X increases by Y? Based on that, does the supply chain have the ability to support it? If it does, what are the options?

So here, you can run not just your own scenarios, where you're tweaking some of the parameters and tweaking the demand, but there are also system-level scenarios that run automatically. These could be things like your max service scenarios or your least cost scenarios, where you're always evaluating: what if I did not do any additional-cost operations?

Say I went with just the least cost: what is my service level, what are my fill rates based on that? Or if I decided I can spend money, then what is the additional cost of non-standard operations like expediting or alternate sourcing, etcetera? The last one is around postgame, and the postgame comes from being able to look at all the data that you're collecting in the system, what happened historically and what the drivers were, trying to do root cause analysis to understand why forecast versus actuals is low. Was there a competitor event that caused it, or some other disruption event, or a macroeconomic indicator that caused it? The system can automatically review the data coming in and run some postgame analytics on it to come up with a root cause analysis

of what happened, and help you decide what to do about it. So that was a quick overview of some of the details behind the EKG. I hope it gives you a better understanding of the elements that make up the EKG.

So the EKG itself is a generic framework. How do we bring in some of the reference applications that allow us to quick-start on it? In the next section, I'll talk a bit about the industry reference models, which really gives you an intro into the various flavors of the reference applications that come out of the box. There are two elements to it.

It's all powered by the platform, but using the platform, we're able to assemble what we call reference applications, which are a collection of out-of-the-box applications with a predefined set of workflows, with planning and analytics baked in for demand planning, supply planning, commercial planning, etcetera. You have a different set of applications available, but there's also the ability for you to take the platform by itself: say I have certain whitespace applications, certain needs where I need to complement existing planning solutions with some additional workflows.

Those can be easily built using the platform framework that we have. And the reference applications really follow the same process: they are also using the PaaS elements of the platform to assemble the reference applications. There is also the platform to run analytic algorithms, and the whitespace applications that you can build yourself.

Getting a bit more into the reference applications themselves, what are they and what are the benefits? This slide will really talk a bit about that. What we're looking to do is improve your time to market by being able to use those packaged workflows, which are easily extensible. I think that's the key word here. The packaged workflows come with a predefined set of models and screens, but they're not locked in stone.

Those can be easily extended, and that's really where the self-service abilities of the platform come into play. Some of the elements as part of the packaging would be our industry best practices, because there are a lot of learnings from working with customers; you've seen the portfolio that we've had and some of the folks that we have on the team. They have a lot of rich experience in these industry best practices, and we've tried to take all those learnings and put them into these reference applications.

But they're also continuously updated. Right? One of the elements of this is the model library, which I'll talk about in a later slide, where we're able to collate them and use the model library to package applications and move them forward. And then, obviously, the benefits of the cloud: because we are a native cloud platform, the knowledge sharing becomes somewhat ubiquitous.

Reference applications themselves have a release cycle, which is really what I wanted to talk a bit about here. They come with their own implementation guides, and we are on a quarterly release cycle, so you'll see updates to the reference applications every quarter with some new workflows getting added. You'll have your implementation manuals, your data models, and things like that.

The last piece to this was around the model library that's natively enabled in the platform.

The reference applications are just one type of content that the o9 team produces.

There's also the industry-specific content and some of the knowledge-sharing elements; those can be prepackaged. Think of the model library as the App Store in the traditional sense. There's a collection of different models: an attach rate model could be just a unit model that's available in the model library, or it could be a complete workflow, like supply planning for configure-to-order workflows. Those can be packaged as models, and those are now available as part of the model library for consumption. It's not just restricted to o9. Our vision here is to extend it further, where partners such as our SI consulting partners or boutique partners, the Accentures and Deloittes, etcetera.

They can build their own apps or whitespace applications and then publish them to the model library, which is then accessible to anyone, clients and consulting partners, again if they choose to make them public. So there is a notion of a public library and a private library, which allows us to control whether these models are consumable or not.

And the same thing extends to clients themselves, customers like Calmar, where your teams may be able to create your own custom applications on the platform. Those, you can choose to publish to the model library store, and they can be used just internally by you. For example, I want to publish from my sandbox environment to my test and production environments.

Then those go through the same cycle: you can publish to the model library from one environment and consume it from another environment. So this is really the framework that we use for pushing out some of the packaged elements of the platform.

And being able to make it available for everyone to consume from. That gives us a quick overview of what the elements of the platform are; we talked briefly about the platform itself and some of the elements of the reference applications. So in the next section, let's start getting into the EKG in depth. As you've seen in the diagram before, the EKG is a collection of models.

Let me just quickly go back here. It's a collection of models and a framework for modeling. So we have our OLAP models, network models, and so on. We'll talk a bit about some of these core models that make up the EKG: the market knowledge models and the demand and supply chain knowledge models.

Let's start with the demand knowledge models. Let's talk a bit about the building blocks. Right? There are a number of atomic building blocks that make up these models.

For example, in the revenue model or the demand models, you're talking about the products, sales domains, initiatives, budgets, etcetera. So we're modeling some of these; again, just highlighting a few of them. Your product hierarchy itself, let's say, out of the box has these six levels, but for Calmar it may have twelve levels.

So what the platform provides is the ability to create planning hierarchies on the fly, to create alternate hierarchies, and to create attribute-based hierarchies. It's not just the static hierarchies that you pre-create; you can aggregate by any attribute that's available, without any limitation on the number of hierarchies or the number of attributes. Those are artificial limitations in most tools, but here there is no limit.
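Attribute-based aggregation is easy to picture with a toy table; the product attributes and numbers below are invented for illustration, but the point is that any attribute column can act as a hierarchy level on the fly.

```python
import pandas as pd

# Hypothetical product master with free-form attributes rather than a fixed hierarchy.
products = pd.DataFrame({
    "sku":       ["A1", "A2", "B1", "B2"],
    "brand":     ["Alpha", "Alpha", "Beta", "Beta"],
    "pack_size": ["small", "large", "small", "large"],
    "forecast":  [100, 250, 80, 120],
})

# Any attribute, or combination of attributes, becomes an aggregation level.
print(products.groupby("brand")["forecast"].sum())
print(products.groupby(["pack_size", "brand"])["forecast"].sum())
```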

So how would I visualize some of this in the platform? Let me just quickly switch over to the platform and show you some of these workflows on the platform itself.

I'm gonna switch over to my tenant here, just a sample tenant. We can maybe start looking at an example of the EKG. So how do I navigate the EKG?

An easy way to do that is Google-like search. In enterprises, typically, it's extremely hard to find information about entities, but the EKG really helps us simplify that process. So, for example, I'd like to understand the status of an order.

In the system, what I can do is start querying an order, and the EKG is really going to search for the order and find a match. Similarly, you can query other things as well. Right?

For example, say I want to look at the top five SKUs by gross revenue. As I'm typing, you'll notice that there are a number of matches coming up. These are all elements in the EKG, and that's really how I'm trying to tie together what we presented in the previous section with how the EKG actually comes to life: it's taking these elements from the EKG and surfacing the elements I've already modeled in the system that I can start looking into. So let's just look at an order, for example, and see how I can look at more information about the order.

I can get all the information about the order through an order page. Here it's only partially populated, but you can look at all the information about the order. And what it also provides me is context for the order. If I want to review the fulfillment plan for this order, then I can go into the fulfillment plan page.

It provides you an active link, but it's also passing the context of this order. When I go into the order fulfillment plan page, you'll notice that order 192 really got carried over here. So it has passed the context of that order, and it's giving you the fulfillment plan for the order. The order is actually not going to be fulfilled.

There is going to be a shortage.

And you can look at the root causes and things like that here. Right? So it's really tying into the enterprise knowledge graph and taking you to the relevant information to give you insights, analyze the problem, and resolve it. That's really the power of the EKG and how it's enabling planning.

But what we wanted to get into was actually the modeling of the EKG itself. So I'm gonna go into a different workspace called knowledge graph.

And here, this is basically what we were talking about. Right? So there are various, elements to the knowledge graph. And let me just collapse some of these.

These elements range all the way from modeling your supply chain to your market knowledge graphs, to your demand knowledge graphs, etcetera. So the knowledge graph on the supply chain side is really modeling your supply chain networks and the supply chain digital twin. The demand graph has a lot of the demand-side entities that we were talking about here, where you have your products, your sales domains, initiatives, budgets, etcetera.

Right? So that would be some of the elements here. Similarly, you have your market knowledge graph, which is talking about market regions, your competitors, things like that. Right?

And then you have supply chain details and SCS knowledge graphs, etcetera.

So that's really what we have here. The knowledge graph is really modeling all of the data. For example, I have my product master; I can look at all of the attribution of the products here, and this also allows me to add new ones. Right? So it's not just static information; it's the ability for you to add new entries and extend the graph itself. If I want to add new products, or edit some of the attribution of these products, that can happen in the system itself.

And, obviously, I can view a visual of how the hierarchy is organized, the various levels in the hierarchies, and so on. Now, how are these actually managed? The self-service element of this is extremely important to talk about, because what we can go into is called the designer.

And here, there are two elements to the designer.

There is what we call the model designer, and then there is the report designer. Right? The model designer is where we're gonna probably spend most of our time, because it is involved in modeling the EKG.

This is where you have all of the elements around modeling your dimensions, your hierarchies, your graphs, your plans, etcetera, and the rules; all of them are part of the model designer, whereas a lot of the visualization, creating these editable reports, is part of the report designer. Say I'd like to understand the product dimension. You'll see in the model designer there are these various tabs on the left-hand side, and as a system architect, I have access to make changes to these. As a planner, I may not be able to change these, but as an architect or an admin, I can go in and look at my dimensions, plans, and graphs, and edit my rules, the various elements that we have. We'll spend some time understanding what these are.

So in dimensions, again, this is where you see that there's no limit. Right? I can model any number of dimensions.

In traditional systems, you'll have limitations on how many dimensions, the number of characters in a dimension, the number of levels in the hierarchy, etcetera. Whereas here, we can go into a product dimension where I can show you what the hierarchies and some of the attributions would look like. Within a dimension, the way the model works is you can create your attributes of the dimension. Right?

So this is where you'll have your attributes. I have product, and I may have attributes one through n, any number of them.

And this is also where you'll set up your hierarchies. Right? So I have my product hierarchy, and then I can create any number of hierarchies here. The power of the self-service here is really the editability of these elements.

I can say I want to add a new dimension. I can click on the plus here, and you'll see this form which says add a new dimension, add a name, and choose what type of dimension it is. If I want to add an attribute, that's again done on the fly.

Right? So I can add a new attribute.

I can choose whether it's a parent attribute, what kind of sorting I want relative to other attributes, and so on. Right? What are the data types? For all of this, think of it as requiring no downtime.

Right? As soon as I add this, it becomes part of the knowledge graph without any downtime: no rebooting the system, no restarting, no having users get off the system. This can be done live on the system, and as soon as you add it, the element is available for you to start populating data. Similarly with hierarchies, for example, I can add new hierarchies on the fly.
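Conceptually, the dimension metadata being edited here is just data that can be extended at runtime. The sketch below is an in-memory stand-in for that idea, not the platform's actual designer API; the names and structures are invented.

```python
# A dimension as plain metadata: attributes plus named hierarchies over them.
product_dim = {
    "name": "Product",
    "attributes": {
        "Product":       {"type": "string"},
        "Product Group": {"type": "string"},
    },
    "hierarchies": {
        "Product Hierarchy": ["Product Group", "Product"],  # ordered top-down
    },
}

def add_attribute(dim, name, dtype="string"):
    dim["attributes"][name] = {"type": dtype}

def add_hierarchy(dim, name, levels):
    # Every level must already exist as an attribute of the dimension.
    assert all(lvl in dim["attributes"] for lvl in levels)
    dim["hierarchies"][name] = levels

# "On the fly": add a new attribute, then a new hierarchy that uses it.
add_attribute(product_dim, "Product Type")
add_hierarchy(product_dim, "Type Hierarchy", ["Product Type", "Product"])
print(product_dim["hierarchies"])
```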

Let's see.

So then, yeah, it's as simple as this. I'm just doing an example here, creating a new hierarchy. Let's start from the aggregate level; say I want to have product transition or product type there.

And then I have product as another level below that. Right? So it's really creating a new hierarchy between these. Again, when you have a lot more elements in the hierarchy, it'll become richer.

But this is just a sampling of that. In addition, you have the ability to model properties of each of these attributes. Right? The product itself may have a number of different attributes, like images or other product properties, and I have the ability to add my attributions and so on.

Right? I can create new ones. I can add different types of attributes; I can have string properties, integer properties, etcetera.

And all of these are part of the whole notion of creating the enterprise knowledge graph model. So this is more on the product dimension and the demand knowledge graph, which is sort of what we talked about here. Let me just continue. We talked a bit about how I can add new dimensions, new levels, new hierarchies.

I can extend it. Right? And that's really the power of using the model designer.

Similarly on the sales domain side, you have different types of hierarchies. Right? So I'll go back to my screen here.

On the sales domain, the hierarchy may include some of your channels. For example, let me just switch back to the knowledge graph: your distributor channel, your dealers, those are channels that can be set up. This is where the difference between the industry models shows up. For example, this one is more of a CPG model, where I have retailers and online businesses, my distributors, direct shipments, etcetera. Those are all ways customers get organized into channels, online stores, and so on. But this gives you an example of how you manage the sales domain models and the various models that are needed across countries and channels. You can have different levels in the hierarchy.

Those would be reflected here as well. Let's just continue down the track and look a bit into the market model itself. These may be things that you may not have today, but they are elements that we can add over time.

And that's really where the extensibility of the knowledge model comes in. Today, let's say you only have your demand and supply chain models, but you're interested in adding the market models, where I'd like to model competitors, for example. You can have your market knowledge models, your market graph, where I can look at competitors: who are my competitors on the manufacturing side, on the distribution side. I can have any number of competitors, and for each of these competitors you can start loading data and creating knowledge out of it.

So that's really the framework that we have here. And these, again, are modeled through the same elements of the designer. Right? In the designer, I can look at competitor as a separate dimension, where you have your market product categories, your market products, who the manufacturer is, their competitors, etcetera.

All of them can be modeled through here. Right. So this gives you the framework for modeling your market products, your market organizations, your market regions; all of those elements come into place, and you're trying to map those to your actual products and sales domains as well.

So this is more of the hierarchical modeling, where we've tried to model some of the OLAP constructs. But the power here is: how do I connect that to my supply chain and the supply chain modeling? This is where the graph modeling framework comes into place. Most of you are aware of what a graph really is: it's connecting nodes through edges. The graph is somewhat unstructured in the sense that you can connect any node to any other node, whereas in a hierarchical model you're restricted; children can have only one parent. Here, this model gives you a more business-friendly framework for modeling the network.

And some of the elements that we use in the EKG framework are these. Right? You have your nodes, which are really some of the attributes and dimensions that we talked about: products, the sales domains, time.

You can obviously add more attributes to these. Any of these can become part of the node that we define, which could be a collection or intersection of product and sales domain, or product, sales domain, and time. The next part is the relationships.

How do I take those nodes and connect them? A product relationship could be that the same product X has a cannibalization relationship with a collection of other products, or product X has an affinity to another product. These are very hard to do in a traditional OLAP framework, because you can't have the same item related to itself; you have to add some additional modeling quirks to get that done.

But in a graph relationship, it's pretty natural, so I can have this relationship. Similarly on the supply chain side, you have your activities, which are really the relationships between your nodes: your manufacturing make activities, your move activities, and so on.
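A bare-bones way to picture those typed edges, with invented node names; the real EKG attaches attributes, policies, and measures to both nodes and edges, which this deliberately leaves out.

```python
# Typed edges between arbitrary nodes; unlike a strict hierarchy, a node can
# connect to any number of other nodes, including products relating to products.
edges = [
    ("product_x", "cannibalizes",      "product_y"),
    ("product_x", "has_affinity_with", "product_z"),
    ("plant_1",   "make",              "product_x"),        # manufacturing activity
    ("plant_1",   "move",              "dc_europe"),        # transport activity
    ("dc_europe", "move",              "customer_ship_to_7"),
]

def neighbors(node, relation=None):
    """All nodes reachable from `node` in one hop, optionally filtered by edge type."""
    return [dst for src, rel, dst in edges
            if src == node and (relation is None or rel == relation)]

print(neighbors("product_x"))                  # ['product_y', 'product_z']
print(neighbors("plant_1", relation="move"))   # ['dc_europe']
```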

Those activities become part of the graph model for us to model a supply network. These two elements, nodes and edges, really form the framework for modeling, but how they are pulled together is really through these connected models. Right? Now, each of these models could actually be at different granularities.

My revenue plan, my top-down targets, could be at an aggregate level of, let's say, product category and month. Right? I'm setting my targets at that level. Whereas my forecasts are at a more granular level.

I could be at a SKU level, at a week level, at an account level. And they may go further down: in my shipment forecast, they may be at a ship-to location and not at the account level. So these are, again, examples where all the plans do not have to have the same level of granularity.

And that's a powerful distinction from some of the OLAP models, because this allows us to save on storage, in terms of how much data we store, and make it more efficient, but also in terms of computational efficiency. It gives us the ability to run these plans much faster, and to propagate and link these plans, which is really where the power of the graph modeling comes into place. I'm able to link a plan at one level of granularity to another plan through the common attributes that they have. Those really form the backbone, or the framework, of the EKG modeling.
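Here is what linking two plans at different grains through shared attributes looks like in miniature, using pandas and made-up numbers; each plan keeps its own grain, and only the common attributes (category and month) are used to line them up.

```python
import pandas as pd

# Weekly, SKU-level forecast (fine grain).
forecast = pd.DataFrame({
    "sku":      ["A1", "A2", "A1", "A2"],
    "category": ["Cat1", "Cat1", "Cat1", "Cat1"],
    "month":    ["2024-05", "2024-05", "2024-06", "2024-06"],
    "units":    [120, 80, 130, 90],
})

# Monthly, category-level targets (coarse grain).
targets = pd.DataFrame({
    "category":     ["Cat1", "Cat1"],
    "month":        ["2024-05", "2024-06"],
    "target_units": [230, 210],
})

# Roll the fine-grained plan up to the coarse plan's grain, then compare.
rollup = forecast.groupby(["category", "month"], as_index=False)["units"].sum()
comparison = rollup.merge(targets, on=["category", "month"])
comparison["gap"] = comparison["target_units"] - comparison["units"]
print(comparison)
```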

And the advantages that we see: this is a slide that you may have seen in the past, about fake leaf-level details. This was a question that had come up: for OLAP systems to maintain data, they have to blow it down to the lowest level of detail. And this really makes it cumbersome for information to be accessed in good time. It makes it very slow.

It bloats the system when that's not needed, and there's exponential growth in the amount of space as the number of dimensions increases, which is why you have restrictions on the number of levels in the hierarchy and the number of dimensions you can have, because of the limits these fake details impose. Whereas in our system, which is a graph cube model, what we're saying is that the various plans are maintained at the level at which they are planned.

So I have revenue plans; when I plan them, I'm planning at this level, so the data should be persisted at the same level. Whereas my demand plans need to be more granular, so they sit at a more granular level of detail.

And these two can come together through common levels of aggregation and disaggregation, which is where both the OLAP capabilities of aggregation and disaggregation, as well as the graph capabilities connecting these various nodes, come into place. So, for example, I can look at plans at various levels of detail. Let me just switch to my tenant here. We had talked a bit about dimensions.

We didn't get into plans, but plans are really a collection of these various dimensions and attributes, and that is where we store some of the key metrics. So let me go to my forecasting models.

Here I may have a consensus forecast, which happens at reporting customer, product group three, and month. So I'm going to run a consensus plan at an aggregate level, where I'm getting information from my top-down plans, my product plans, as well as my account plans coming together. This is an example where the consensus model is at an aggregate level. Whereas if I go down to my ship-to forecast, my stat forecast where I'm running my statistical models, those may be at a lower level of detail. If you look at the stat model that runs for sell-in forecasting, I have it at reporting customer and product, whereas previously, in the consensus, it was at product group three.

The stat forecast is also at reporting customer, and location is an additional level of granularity; I did not have location in my consensus, which aggregates across locations.

And instead of fiscal month, I'm looking at fiscal week. So this gives you an example where the level of detail in each model is different, but they are seamlessly aligned together. So when I go to a screen, let me just go to a consensus screen to show you how these come together. Let me go to consensus.

Here in the consensus forecasting screen, you'll see I have multiple of these elements coming together. I have this consensus forecast between two of these, right, which are the bottom-up forecast and a top-down forecast.

Now the bottom-up forecast, like you saw, is coming from a lower level of granularity; it's at an account level, at a week level. So I can drill down into the bottom-up forecast to actually see the details of the forecast at a lower level of granularity, which is the week level.

But I can go to my top-down forecast and look at the information there, which gives me the grain of that forecast; it's at a more aggregate level, and these two come together. I can find commonalities between them, and I'm doing it on the fly; the system is actually doing the aggregation.

Where I'm looking at a product line level for a specific customer.

And in this case, the bottom-up forecast is aggregated up, whereas the top-down forecast is already at this level. That's the modeling framework, where you don't need to model everything at the same level of detail. There is the ability to model them at different grains and then bring them together as part of the planning workflows.

And this is what we talked about. The elements here are around how I spread and disaggregate this information. In the screen here, when we were looking at the data, there are a few options: if I want to intelligently spread this information, I have the ability to lock and freeze totals and spread the information, and I can choose what my spread basis is right here.

So this gives you some intelligence in the spreading. You could do default spreading based on weighted averages, depending on how you set those up. Those are, again, part of the model designer, where when this measure was created, it had certain attributes identified for it. Right?

So let's just go back and look at the designer again, where we can pick a particular measure, the forecast override measure, which has certain attributes. We can look at some of those attributes, where it says it's editable.

And then, what is the aggregation type for this? Right? It is summable. This is where some of the aggregation and disaggregation workflows come into play, where I can define what kind of aggregation it is: whether it's average, average non-null, first child, last child, which come into play for inventory measures, and so on. Those are set up here, but I can also have differential aggregation, which means each dimension can have a different aggregation type. Right? So when I set up aggregation, it gives me the ability to set up the aggregation by dimension.
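Differential aggregation is easiest to see with an inventory-style measure; the data below is invented, and the choice of "last along time, sum along product" is just the common stock-on-hand pattern used as an example.

```python
import pandas as pd

# Hypothetical weekly inventory by product.
inventory = pd.DataFrame({
    "product": ["A", "A", "B", "B"],
    "week":    [1, 2, 1, 2],
    "on_hand": [50, 40, 30, 35],
})

# Aggregate differently per dimension: take the last value along time,
# then sum along product.
last_week_per_product = (inventory.sort_values("week")
                                  .groupby("product")["on_hand"].last())
total_on_hand = last_week_per_product.sum()
print(total_on_hand)  # 40 + 35 = 75, not the sum of every weekly snapshot
```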

Similarly on the disaggregation side: for the same measure, if I look at the disaggregation, you'll see there are disaggregation basis options available. What is my basis for disaggregation?

It can be the same measure by default, or it can be a different measure, so I can look at that information here. And then, what are the various spreading types available? Right? Here, in addition to distribute to leaves, which is the normal method, you could copy to leaves, or you could do integer spreading.

Distribute negative numbers, additive numbers, etcetera; there are a number of options, and you can have multiple bases here. Right? So I have a basis, but I can also define what the basis for spreading is if the measure data is null, if the measure has no data at particular intersections, or I can have an assortment basis, which means only spread to intersections where the assortment basis measure is populated.
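In spirit, basis-driven spreading looks like the small sketch below; the proportional rule and the even-split fallback for a null basis are simplified assumptions standing in for the richer options just described.

```python
def spread(total, basis):
    """Spread `total` across leaves in proportion to `basis`; fall back to an
    even split when the basis has no usable data."""
    if any(b is not None and b > 0 for b in basis):
        clean = [b if b else 0 for b in basis]
        weight_sum = sum(clean)
        return [total * b / weight_sum for b in clean]
    return [total / len(basis)] * len(basis)

print(spread(100, [30, 10, 60]))        # [30.0, 10.0, 60.0]
print(spread(100, [None, None, None]))  # roughly [33.3, 33.3, 33.3]
```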

So there are various combinations of these that you can set up to define the model. And then the last piece to that is how the rules are defined. All of these come together through a set of rules, and think of these as simple rules like you would set up in spreadsheets, where, let's say, column A is equal to column B times column C. And then you obviously have additional constructs around that in terms of modeling.

But the dependency graph really shows you how some of the measures are connected. Again, some of these can get complicated because of the number of interrelationships between the models, but this gives you a framework to actually navigate the calculation graph as well. So when an admin is looking at how a measure was actually computed and which nodes contribute to the calculation, this gives you a framework to trace back the entire calculation, similar to what you see in Excel, where you can review the whole calculation and trace how the computation happened.
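A toy version of that rule-and-dependency idea, with invented measure names; in the platform the rules, grains, and dependency graph are managed for you, so this only mirrors the spreadsheet-style mental model.

```python
# Measures defined as spreadsheet-like formulas over other measures, listed in
# dependency order so each one can be evaluated and traced step by step.
rules = {
    "Revenue":      lambda m: m["Units"] * m["Price"],
    "Gross Margin": lambda m: m["Revenue"] - m["Cost"],
}
inputs = {"Units": 100, "Price": 12.5, "Cost": 900}

measures = dict(inputs)
for name, formula in rules.items():
    measures[name] = formula(measures)

print(measures["Revenue"], measures["Gross Margin"])  # 1250.0 350.0
```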

So that's really what's here: the aggregation is automatic based on the aggregation type that's set up, across all your hierarchies, and for spreading there are a lot of advanced spreading capabilities available. And then the user has a lot of controls, like I showed you: you can freeze totals, and you can spread to leaves based on different spread bases.

And then, obviously, the other piece to it is external market models, where the models are not in your traditional company hierarchies. Right? So I may have data coming in from external market sources. Let's just go to that example here in my demand plan.

I may have data that's coming in from external sources. Let me just go to the demand drivers here, where your drivers of demand come in. Right? It may be a few things: for example, weather might be an impact on your demand; your CRM pipeline, especially with your B2B customers, may be an input to your forecasting; your price plans, your calendars for marketing promotions, your new product introduction calendars.

All of them are elements that would contribute. Each of them is really a collection of data, right: your launch plans for the new products, your weather-related information, for example weather events, temperature, etcetera. Those can be captured here.

And again, the advantage here is that all of these elements are at different levels of detail. Right? My weather may be by zip code, by country, by state, whereas my marketing promotion calendar may be at a different level of detail. Right?

So these may be by account.

By product category or by product and by time, by week, etcetera.

Whereas some of my market data is, for example, syndicated data where I'm looking at market size and market share.

These are elements that come in at market level. Right? So my market product category may be different.

And there needs to be a mapping between the market product category and your product category, and similarly for market share: how do I get data about that? You'll have to model market product categories and market organizations, which will include yourself as well as competitors.

And being able to capture the market size and share. Those are, again, elements of the data model itself, in terms of how we model not just your traditional data models, but also extended market models, which can be at various levels of granularity and detail. Right? So that's really the market attribute modeling. The other piece to it is really the digital twin and the supply chain models, which is where some of these come together.

Again, here it's representing your entire network, not just your own network, but your extended network, going all the way from your tier one and tier two suppliers all the way through your supply chain. And it's being able to model all of the elements of the supply chain, like your lead times, your supply chain policies, distribution, your alternate lanes, alternate modes, etcetera. We can go back to the knowledge graph screen here, and in the network that I briefly showed you before, it basically allows you to look at and zoom in to see more details about the network.

For example, here you have a supply chain network graph shown in a geo map view, but you can actually drill down and drill out, zoom out and zoom in. As you start zooming in, what you'll notice is that it goes into the digital twin representation of the model itself. Right? So you can go back and look at all of the elements of your model, going all the way from your customers to the ship-to locations to your DCs,

DCs to plants, plants to suppliers, and so on. And you can extend the models: within the plant location, you can look at what my routings and resources are and what raw materials I'm consuming. So it gives you the ability to zoom in and view the details of the entire model itself. Similarly on the supplier side, you can have additional models for the supplier: supplier locations, your lead times from suppliers, their various plant locations, and within them, how those are getting consumed in your supply chain network.

So these can get very complex. Again, the intent here is that this is auto generated. You're not doing any additional work to actually create the digital twin. This is automatically inferred from the data that we load in.

So these are automatically created, and that's really the power of it: not having to configure these manually, but an automated way for you to visualize your supply chain and perform certain activities on those nodes as well. Right? So if I want to understand a supply chain node, what are the policies that are set up for this particular plant location, what are my routings?

If I need to make changes to the BOM itself, then I can manage it from here. Or if there are transitions that I need to look at, product or raw material transitions, and so on. And all of those elements are part of the supply chain knowledge graph, where I can look at my bill of materials, my routing policies, my bills of distribution, routing lanes, procurement lanes, etcetera. And each of them has a set of policy parameters you can go in and maintain, and it gives you a framework. So part of this is the reference model for the particular industry, where we talk a bit about the bill of distribution, your consumption rates, your yields, your time-phased quantities produced, and so on. All of them are editable, and they can be managed through the platform itself.

And, basically, the framework that we're trying to position, what we saw in the digital twin, it all leads to this, right, being able to do a better demand supply match. Being able to model your digital twin, and then being able to model all of the inputs. Right? You have various policies, your constraints in terms of cost, your capacities, network models, your priorities; all of them feed into the digital twin model for the EKG.

And that's able to then run our demand supply match solver and advanced analytics to give you your optimized supply plans, your supportability, and also tie them to financials. So that's an important element here, where the financial models are pretty sophisticated. So for any changes, if I need to expedite an order, it'll also give you information about what the financial impact of that is. So that becomes important as you're going through the modeling as well as the usage of the network itself.
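As a toy illustration of the demand supply match idea, here is a two-plant, one-demand linear program solved with SciPy; the costs, capacities, and demand are made up, and this is a generic LP sketch rather than o9's own solver:

```python
# Illustrative sketch only: pick how much to produce at each plant so that
# demand is met at minimum cost, subject to plant capacities.
from scipy.optimize import linprog

# Decision variables: x1 = units made at Plant_1, x2 = units made at Plant_2.
cost = [5.0, 7.0]              # unit cost at each plant (objective to minimize)
A_eq = [[1.0, 1.0]]            # total production must equal demand
b_eq = [100.0]                 # demand of 100 units
bounds = [(0, 60), (0, 80)]    # plant capacities

res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("optimal plan:", res.x)  # expected: fill Plant_1 (cheaper) first, e.g. [60, 40]
print("total cost:", res.fun)
```

Real demand supply match problems add many more constraint types (lead times, alternate lanes, priorities, and so on), but the shape is the same: the network and policies become the constraints, and the solver returns the plan.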

On the characteristics, we talked a bit about how there are a number of policies available in the system. We talked about a few, but there are others, like shipping and receiving holiday calendars, you know, build-ahead and build-late limits. There's a number of policies that are available in the system. And those are all accessible again for the planners.

Once you get into the system, the master planning workflows here would be where you can come and review your inventory policies.

You can look at your planning policies, which could be things like your frozen windows, your manufacturing holidays, shipping and receiving calendars.

If you have to model some of those, all of them, you have build-ahead and build-late limits for production operations, by resource. Again, this is where some of the aggregate modeling comes into play as well. You don't need to set up these policies down to the demand level. This can be maintained at product group levels, where for this particular product group the build-ahead limit might be three weeks or three days.
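As a minimal sketch of how that kind of group-level policy might be stored and resolved, with product-level overrides (the behavior described next) falling back to the group default when absent; the group names, product IDs, and values are made up:

```python
# Illustrative sketch only: build-ahead limits maintained per product group,
# with optional per-product overrides.
group_build_ahead_days = {"GroupA": 21, "GroupB": 3}    # group-level defaults
product_overrides = {"SKU_1001": 7}                      # exceptions per product
product_to_group = {"SKU_1001": "GroupA", "SKU_1002": "GroupA"}

def build_ahead_limit(product: str) -> int:
    """Use the product-level override if present, else the group default."""
    if product in product_overrides:
        return product_overrides[product]
    return group_build_ahead_days[product_to_group[product]]

print(build_ahead_limit("SKU_1001"))  # 7  (product override)
print(build_ahead_limit("SKU_1002"))  # 21 (group default)
```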

And then I can have overrides at the product level if I need to, or I can use the group-level data as well. Some of this flexibility is, again, part of the modeling, just to make it more efficient and to be able to model these in the network itself. Similar to the demand side, where we talked about the different levels of detail, on the supply chain side there's also an important element here, which is that in most traditional systems you would model the supply chain at the lowest level of detail.

So I'll have to model it down to the day or the week level. I'll have to model every SKU, every resource, and so on. Right? And that makes the plans extremely slow to run.

So it becomes almost like a batch plan, and I'm not able to do what-if scenarios. For example, say I have to run an eighteen-month plan where I need to understand long-term capacity implications.

Running it at this level of detail is not feasible. It'll almost have to be a batch run, and I cannot do what-if simulations; my S&OP and demand supply match scenarios become extremely slow.

Whereas in the o9 case, what we're doing is that there are different levels of detail. So on the time dimension, you would look at telescoping buckets, where in the short-term, operational horizon I may plan at a week level. My tactical horizon may be at a month level. My long-term horizon may be quarterly, etcetera.

So you can create these telescoping buckets but not just in time. Right? You can also telescope the other levels of detail. For example, my capacities that I model in the in the operational horizon can be aggregated in the long term horizon.

So when I'm looking at tactical and long-range planning, I'm looking at aggregated capacity. So there is intelligent aggregation that is built in as part of the supply chain networks.

Where the data for the telescoping buckets also aggregates capacities. You can aggregate your resource information, your item information. You don't have to look at unique parts at the item level; those can be planning components that are modeled in aggregate.

Depending on the use case, if I'm looking to understand long-term capacity implications, then my model does not have to have this level of detail. And this is really where the power of the modeling framework itself comes into play, powered by the EKG. The framework is available for us to model not just detailed plans, but aggregate-level plans as well. So as an example of this, for maybe the next twelve weeks of the plan, I'm planning at the most detailed level.

Whereas for the following three months, I'd like to look at it at an aggregate level in time and in products. So I'd like to plan at the product group and month level. And then for the following three quarters, maybe go up one more level.

I look at category quarter level.

And then for the next year, when I'm looking at long-range plans, I do it at a category-year level. Right? So it's really telescoping, being able to do this intelligent aggregation and being able to present information in that format. So if you look at the graphic here, it's showing you the telescoping time buckets, where I have weekly information for the first few weeks, then I roll them up to months and then roll them up to quarters and so on.
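As a small sketch of that telescoping roll-up (weekly detail near term, months and quarters further out), with made-up demand numbers and arbitrary bucket boundaries:

```python
# Illustrative sketch only: roll weekly demand into telescoping buckets.
from collections import OrderedDict

weekly_demand = {f"W{w:02d}": 100 + w for w in range(1, 53)}  # 52 weeks of demand

def telescope(weekly, weekly_horizon=12, monthly_horizon=24):
    buckets = OrderedDict()
    for i, (week, qty) in enumerate(weekly.items(), start=1):
        if i <= weekly_horizon:                  # keep weekly detail near term
            key = week
        elif i <= monthly_horizon:               # roll up to ~4-week months
            key = f"M{(i - 1) // 4 + 1}"
        else:                                    # roll up to ~13-week quarters
            key = f"Q{(i - 1) // 13 + 1}"
        buckets[key] = buckets.get(key, 0) + qty
    return buckets

for bucket, qty in telescope(weekly_demand).items():
    print(bucket, qty)
```

The same roll-up logic can be applied to capacities and resources, which is the point being made here: the aggregation is not only in time but across the other levels of detail as well.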

Right? So the plan information itself is aggregated, not just the demand but also the resource information; how we look at capacity utilization in these various telescoping time buckets is part of how the demand supply match solver runs as well. So those were some of the elements that made up the knowledge graph. Everything that I wanted to show you is in the system. We may not have time to go through the report designer, because that is, again, another separate session in itself, but we will share with you another video that has elements of configuration, self-service configuration of the platform, where I can go in and create my own reports on the fly.

I can create my own views. I can reconfigure the reports and things like that. But what I did want to touch upon is the other piece to it, other than the modeling. We talked a bit about the demand supply match solver, but there's also the whole analytics piece to it, where you're looking at converting this data into knowledge.

And we do have a very sophisticated analytics framework, and I'll touch on that a bit when I go into the designer; everything is managed through the designer framework. So in the designer, there is the rules framework that we talked about, but within the rules framework, there are these plugins. And these are very important because they allow us to not just provide our own o9-specific plugins, but allow customers to actually extend them further.

And so there's a framework here where I can have my R plugins, I can have my Python plugins, I can have my optimization solvers, like LP solvers, etcetera.

And this gives you a framework for you to actually create those plugins. Right? So I can have my Python scripts, and then I can add my own scripts. And each of these really provides a simple framework.

If I look at an R script, you'll notice that the script code is really copy-paste. So for you, if you have an R script, you can just copy and paste it here, and then map it into the fields. Right? So for the inputs and outputs, I can map them into the enterprise knowledge graph models: which table or which KPI maps to which specific measure.

It gives you this IDE, or framework, for you to actually build your own algorithms and build your own scripts, and this really then plugs into building the analytics framework. Again, the big data analytics is embedded, so you can run and create your own analytics, examples being forecast accuracy and bias or supply reliability. How do I track, based on actual purchase orders or planned purchase orders, how reliable the supplier has been to the PO commits, and so on.
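As a hypothetical example of the kind of small script one might drop into such a plugin slot, here is a forecast accuracy and bias calculation; the input rows and the output measure names are placeholders for whatever they would be mapped to in the knowledge graph, not o9's actual measures:

```python
# Illustrative sketch only: compute forecast accuracy and bias from
# actuals-vs-forecast rows.

def forecast_accuracy_and_bias(rows):
    """rows: iterable of dicts with 'Actual' and 'Forecast' values."""
    total_abs_err = sum(abs(r["Actual"] - r["Forecast"]) for r in rows)
    total_err = sum(r["Forecast"] - r["Actual"] for r in rows)
    total_actual = sum(r["Actual"] for r in rows)
    wape = total_abs_err / total_actual            # weighted absolute error
    return {
        "Forecast Accuracy": 1 - wape,             # would map to an output measure
        "Forecast Bias": total_err / total_actual, # positive = over-forecasting
    }

history = [
    {"Actual": 100, "Forecast": 110},
    {"Actual": 80,  "Forecast": 70},
]
print(forecast_accuracy_and_bias(history))  # accuracy ~0.89, bias 0.0
```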

Similarly, your lead time accuracy, where you can track how accurate your lead time is: the planned lead time you're using in the system today versus the predicted lead time you should be using based on historical analysis. So there are similar such analyses, and one of the powers of the platform is really pulling them all together. As you can see, in the same platform, or the same application, I'm seeing a number of different workspaces. Right?

I have all the plans here, from demand plans to market plans, to account plans, your JBPs, your IBP plans, which is your S&OP processes, your master planning, your control towers, etcetera. All of these are workflows that are brought together and assembled, and they can be presented in the same platform. All your core planning processes are basically represented in a single home, and they are using the same enterprise knowledge graph. And that's really the key here: there is no separate data transfer between these various modules.

They're not separate pieces. They are just lego blocks that you can add on, but they are accessing the same data. So there's no latency in data transfer. And all of them are able to pass data back and forth, and you're able to see data changes in real time.

There are some usability features available in the platform, but really the intent here is not to build something new, but to give the planners the experience they're used to. Right? Where you can use Excel. We have connected Excel.

You can collaborate with the platform over email. You can send in emails, and the emails get auto-tagged in the system. You can use mobile interfaces for digital operations. We have an NLP-based search that we briefly talked about.

And then, obviously, it's not a dashboarding tool, but it's an editable interactive application.

Right? So all of the data elements on the screen are editable, and they're managed by the users. Some of these we can skip; I think we've covered most of these elements. But the fact that you can connect external systems is also important.

For example, Power BI, or your own existing BI applications that you want to access. We have ODBC connectors that you can use to connect to the system.
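As a rough illustration of that external-access pattern, here is a generic ODBC query from Python; the DSN, credentials, and table name are hypothetical placeholders, not o9's documented interface:

```python
# Illustrative sketch only: an external tool or script reading planning data
# through an ODBC connection exposed by the platform.
import pyodbc

conn = pyodbc.connect("DSN=planning_platform;UID=report_user;PWD=***")
cursor = conn.cursor()
cursor.execute(
    "SELECT ProductGroup, SUM(PlannedQty) FROM SupplyPlan GROUP BY ProductGroup"
)
for product_group, planned_qty in cursor.fetchall():
    print(product_group, planned_qty)
conn.close()
```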

And maybe I'll end with some of these key elements here, which are around extensions. The extensibility of the models is extremely important. What I showed you earlier, where we said you can add a new dimension and you can add new metrics on the fly; these are elements that are part of the other video that we'll share with you. Similarly, a rules framework and the computation frameworks, those are available here.

And then there are the UI extensions, which are part of the report designer framework, in terms of the drag-and-drop ability to create new views and reports, etcetera.

So with that, I'm going to pass over some of these. Again, we've talked a lot about all of these elements, so we don't need to cover them. But I hope this gives you a good framework for what the EKG's abilities are and how the EKG models some of these elements, because these are all a result of the EKG enterprise modeling, where I can do a bunch of analytics and do my planning. Being able to configure the EKG, being able to extend it easily without having downtime, those are some of the really important and powerful elements of the platform itself. So going back to the original screen where we started, I just want to bring them all together, because what we touched upon was really this piece, in terms of the modeling framework and some of the self-service abilities for us to create these industry-specific models through our reference applications.

Pulling them together and then being able to convert them into very usable constructs: converting all the data coming in into knowledge, running analytics to provide insights, and then being able to use them in the planning workflows for better decision making and better decision support. So I hope that was useful. That was maybe a bit longer than what I wanted to do. If you need any more details on some of the information that we covered today, please do reach out to the team, and then we can have a more detailed session about it. Thank you.


FAQs

What is EKG in o9? ›

The o9 platform is a knowledge-powered analytics, planning, and learning platform that helps businesses make better decisions. The platform is built on the Enterprise Knowledge Graph (EKG), which is a repository of all the data that is relevant to a business.

What is an enterprise knowledge graph? ›

Enterprise Knowledge Graph organizes siloed information into organizational knowledge, which involves consolidating, standardizing, and reconciling data in an efficient and useful way.

What is the use of o9? ›

o9 brings together technology innovations—such as graph-based enterprise modeling, big data analytics, advanced algorithms for scenario planning, collaborative portals, easy-to-use interfaces and cloud-based delivery—into one platform.

What is the implementation of o9 technology? ›

o9 provides real-time data synchronization, offering timely and accurate insights for enhanced operational efficiency and strategic decision-making. We performed the end-to-end integration for volume forecasting and financial planning.



What are the three 3 major types of knowledge in enterprise? ›

As you get deeper into research, you may encounter the terms “implicit, tacit, and explicit knowledge.” These terms describe three different types of knowledge–all of which are important for businesses to capture, maintain, and share.

What is a knowledge graph simple explanation? ›

A knowledge graph formally represents semantics by describing entities and their relationships. Knowledge graphs may make use of ontologies as a schema layer. By doing this, they allow logical inference for retrieving implicit knowledge rather than only allowing queries requesting explicit knowledge.

How to build an enterprise knowledge graph? ›

How to Build a Knowledge Graph
  1. Step 1: Define Objectives. Before doing anything else, it's important to define the problems your knowledge graph is going to solve. ...
  2. Step 2: Engage Stakeholders. ...
  3. Step 3: Define Your Knowledge Domain. ...
  4. Step 4: Choose a Platform. ...
  5. Step 5: Building an Initial Framework.
Sep 5, 2022

Why is o9 Solutions growing so fast? ›

o9's growth has corresponded with the company's promotion of the graph database as a better way to do supply chain planning. “We pioneered (the use of the graph database) for planning” Mr. Gottemukkala asserts.

What is the mission of o9? ›

Our mission is to provide global companies with a game-changing planning platform to transform their supply chain, commercial, finance, and sustainability decision-making.

What is o9 control tower? ›

The o9 Control Tower is designed to sense disruptions, translate these into impact, auto-generate scenarios, make decisions and learn and update resolution protocols. In this example we illustrate an example of how the o9 Control Tower deals with a demand surge.

Who are the clients of o9 Solutions? ›

Companies Actively Evaluating o9 Solutions apps include: RBC, a Canada based Banking and Financial Services organization with 86007 Employees. Triniter, a United States based Professional Services company with 20 Employees. o9 Solutions, a United States based Professional Services organization with 2500 Employees.

Is o9 Solutions a SaaS company? ›

Partnering with HCLTech, they implemented an o9-based SaaS solution for supply and demand planning, incorporating real-time visibility and AI/machine learning capabilities. We provided crucial implementation support, ongoing maintenance and enhancements to improve system functionality.

Who is the owner of o9 Solutions? ›

Sanjiv Sidhu, Chairman and Co-Founder of o9 Solutions.

