Users are demanding self-service reporting and data discovery. Your Framework Manager model does not support fast ad hoc reporting—but it has a ton of metadata and business logic in it. You heard data modules and data sets in Cognos Analytics are IBM’s response to Tableau and Power BI. Which way to turn?
A hybrid approach! Use data modules and data sets to leverage your existing Framework Manager model structures. The result is a nimble ad hoc reporting environment where users can drag and drop the measures they need and reports run fast. And your company continues to benefit from all that hard work you did to build out FM.
In this on-demand webinar, learn about a real-life application that used Cognos data modules and data sets to leverage existing Framework Manager models. Senturus consultant Pedro Ining discusses work he did for a client who wanted to create a high-performance ad hoc reporting environment. He’ll show how a hybrid DM and FM architecture freed users from having to remodel data in Excel.
See demos to learn
- The business use case and resulting data module architecture chosen
- How Cognos data sets were designed and created
- The modeling techniques and challenges
- Various data module features that were used
  - Relative Time
  - Customizing the Relative Time calcs
  - Creating custom calcs like COUNT DISTINCTS, CUMES
  - Managing the metadata naming conventions
  - Use of COLUMN DEPENDENCIES in data modules
- Issues and problems discovered
Presenter
Pedro Ining
Principal BI Analytics Architect
Senturus, Inc.
Pedro is a veteran data architect. His skills have kept pace with the industry through multiple iterations of BI platforms including Cognos, MicroStrategy and Tableau.
Machine transcript
Greetings everyone and welcome to the latest installment of the Senturus Knowledge series. Today, we’re pleased to be presenting to you on the topic of Cognos Data Modules and Framework Manager, how to combine those for high performance reporting.
0:23
Before we get into the presentation, a few housekeeping items.
0:26
Please feel free to use the GoToWebinar control panel to make this session interactive while we have the microphones muted out of courtesy to our presenters.
0:34
We strongly encourage you to submit questions, as indicated on the slide here, in the question panel. And while we’re generally able to respond to questions while the webinar is in progress, we usually wait until the end of the webinar, so please stick around for that; that’s always invaluable. And if, for some reason, we’re not able to answer all questions due to time constraints, we will post the answers on Senturus.com along with the presentation.
1:03
So, the next slide here: people always ask us, can I get a copy of the presentation? The answer is an unqualified yes.
1:11
It’s available on Senturus.com, if you select the Resources tab and then go to the Knowledge Center.
1:17
Alternatively, you can click the link that has been already posted in the GoToWebinar control panel.
1:21
And while you’re over there, make sure you bookmark the Knowledge Center as it has tons of valuable content addressing a wide variety of business analytics topics.
1:31
Our agenda today, after some brief introductions, we’ll get into a data module overview, and some use cases.
1:37
We’ll look at the hybrid model and a real-life use case that we implemented here at Senturus for a client, and look at the requirements as well as the design components.
1:47
And then Pedro will be doing a demo of the features of that hybrid model use case.
1:52
And again, stick around after we do a real quick Senturus overview, for those of you who may not be familiar with what we do here at Senturus, and some great additional, pretty much entirely free resources.
2:02
And then we get to the aforementioned Q&A. So, introductions: today, I’m pleased to be joined by my compadre, Pedro Ining.
2:13
Pedro has been with Senturus since 2010 and has over 20 years of BI and data warehousing experience.
2:20
He has been instrumental in implementing data warehousing systems from scratch and has experienced the evolution of the BI industry through several iterations of BI projects.
2:29
Products included Cognos, MicroStrategy, and Tableau.
2:32
My name is Michael Weinhauer, I’m a director here at Senturus, and among my roles, I have the pleasure of emceeing our Knowledge series events.
2:42
And, in the tradition of our Knowledge Series events, we always like to get an idea of what’s going on with our audience. In that vein, I’m going to put up our first poll: what BI platforms do you use for self-service ad hoc reporting and dashboarding? Please select all that apply: Cognos, Tableau, Power BI, something else.
3:08
Or you’re not doing self-service reporting. So, go ahead: democracy in action.
3:13
Get your votes in here.
3:16
You must have had your coffee this morning, because you’re getting those votes in pretty quickly. We’re already at about three quarters.
3:24
A few more seconds to get those in.
3:30
All right, I’m going to close this out and share the results. So, the preponderance here is using Cognos.
3:39
But one third of you are using Tableau and Power BI, about 10% are doing something else, and a very small subset are not doing self-service.
3:48
So, it’s great to see that there seems to be a lot of self-service out there, and interesting to see lots of Cognos.
3:58
Hopefully, what we show you today will be something that allows you to further expand self-service using Cognos in your organization.
4:07
So, our second poll: this is a single select: how widely employed are data modules within your organization?
4:13
The answers are: most widely, somewhat, not used at all, or you don’t know.
4:20
So if you don’t know, it’s OK.
4:26
A few more seconds. We’ve got about three quarters of you voting.
4:35
OK, I’ll show that back.
4:38
Good bell curve there. Small percentage widely. About half somewhat.
4:44
A third, not used at all. 6% don’t know. Alright. Thank you.
4:49
We always find it interesting to get the pulse of the folks attending our webinars. And with that, I’m going to hand the floor and the microphone over to Mr. Pedro Ining. Pedro, the floor’s yours.
5:01
All right. Thank you, Mike.
5:03
Well, I’ve done a lot of data module type webinars, and we’ve also been able to apply some of that knowledge with some of our clients, but, for those of you who don’t know anything about data modules and datasets within the Cognos environment, let’s go through a quick definition of what these things are.
5:23
Especially for those of you just using Cognos Analytics or Cognos 10 for reporting with Framework Manager packages, writing and distributing reports.
5:35
So, the data module and dataset functionality came in with Cognos 11.
5:47
It debuted as a web-based, end-user-focused data blending, modeling and transformation tool.
5:51
And I think, when this first came out, a lot of the questions were like, oh, does it replace Framework Manager? Maybe, maybe not.
5:58
What are the differences between the two? A lot of these questions started coming out, and we’ve done several webinars on these things.
6:08
And this particular functionality within the tool suite now is really IBM’s response to having more data democratization, much like other tools like Tableau and Power BI.
6:19
Those tools will say: give me the data, and you model it.
6:23
In fact, if you open Tableau from scratch, it’s going to ask you to point it to the data. And Power BI wants to be pointed at some data right away, too.
6:32
All the previous versions of Cognos let IT do the work: IT models the data and serves it up, and then you write reports against that.
6:50
Cognos had to really respond to the way the new tools support data democratization.
6:59
And, with the release of 11.1 and its several fix releases, it started closing some of the technical gaps between Framework Manager and data modules.
7:12
And with 11.2 out, IBM continues that with several bug fixes and interface improvements in the 11.2 release.
7:26
Another thing to note, in terms of Framework Manager versus data modules: all future IBM development resources will be focused on the data module component of Cognos Analytics for enhancements. Framework Manager will still be there;
7:40
like Cognos Transformer, it will not be deprecated at all.
7:47
And, we’re going to show you actually in this webinar how we can leverage some of that within this particular use case.
7:56
So, datasets are basically a feature within data modules that allows us to create summarized data subsets that are stored on the Cognos server. And that’s a key component of what we want to show today.
8:11
So, we’ve been doing several similar projects over the last year at our client sites, and just from my personal experience, there’s been a lot of interest now that the 11 series of products has been out for three years.
8:28
And we’ve seen a lot of different use cases.
8:30
And people really are trying to figure out what to do with this stuff. And one of the things we’ve seen is that people want to create fast dashboards.
8:43
Our reports are slow.
8:45
We’ve seen cases where they quickly created datasets, put them into data modules, and were able to create fast dashboards. Even reports: we’ve seen use cases where slow reports that had been hitting the back-end databases were always slow to run. But they were able to extract the data for those reports into a data module
9:11
and a dataset, and make those reports run faster. Kind of one-off scenarios.
9:19
We actually did a project where we did a POC as a replacement for Transformer. We replicated about 75% of the functionality of Transformer for the client, who was mulling over the benefits of doing that, and we were able to mimic some of the features of the Transformer world.
9:37
I think one of the biggest issues with Transformer right now in the Cognos world is that people still love that Analysis Studio interface instead of the reporting interface. But soon that will have to be transferred over into the new interface in Cognos 11, so we wanted a POC for that.
9:52
And there was a lot of downstream manual Excel processing, where people used Cognos to extract data and create multiple files in Excel.
10:08
Well, they’ve been able to push that back up into Cognos: instead of extracting report files or text files, they’ve been able to extract datasets and do some of that manual Excel processing within the data modules.
10:27
But one of the more interesting projects we’ve done recently is actually a client who wanted to leverage existing FM model metadata to create datasets and summarized, subject-specific analytic layers for better ad hoc reporting and dashboard creation.
10:49
In this one, the client itself had a very good FM model implementation of their data warehouse.
10:56
So, we found this use case to be interesting. One of the webinars I did talked about different architectures, called
11:04
architectural use cases for data modules and datasets. And there was one where the data module was pointing right back to the database, and you did all your modeling raw against the database tables.
11:15
And there was one I call a hybrid model, where the FM model, built against the data warehouse, was actually pretty good. All the business logic was there, all the naming of the fields was there, all the joins were there.
11:32
But in this case, unlike a lot of FM models, the FM model was very granular.
11:37
In this particular case, it’s order line by day by SKU.
11:43
So much of the reporting in that data package was good for operational reporting.
11:49
But they actually wanted a simpler, faster performing layer for ad hoc reporting and dashboard creation.
11:58
So, if they had to go against the package, the biggest complaints were that it was slow and had too much detail. And even a better package might not be well structured for ad hoc performance.
12:12
And a lot of the required calculations were maybe not stored in there, so it was a great case.
12:17
And if we look at how we created the hybrid model on this case, this diagram is showing what we did for this particular use case.
12:26
So on the upper left here, we have our databases. It could be a data warehouse, it could be the OLTP transactional system. And generally, what happens over time is a lot of FM packages get built. We’re going to be using a data warehouse
12:42
here for the webinar, the Microsoft Wide World Importers data warehouse.
12:47
These packages were modeled against that data warehouse or even an OLTP system and they’re generally maintained by IT.
12:56
You can call them legacy, but they have really solid metadata. All the data mappings are already done there.
13:04
You don’t have to re-invent the wheel, you don’t have to point the data module back to the databases and try to figure things out.
13:10
OK, so, what you can do in this model is extract datasets from those packages and leverage that. It could be dimensional datasets, like I’m showing here, and also fact datasets.
13:25
And then you can take those datasets and put them into subject-area-specific data modules.
13:34
Then, what we can do is also take the uploaded files of our analysts, things like third-party data, budget data, Excel files, and integrate those into our data module.
13:51
And these subject-specific areas could be things like sales or supply chain analysis, OK? It could be any number of areas, and it’ll be running off datasets, which makes for a higher performing data module for people to query against.
14:09
So, the common use case for this hybrid model: it was going to satisfy specific requirements from the end users at large, legacy Cognos organizations with a lot of packages. They’re very stable, they have a lot of data integrity, and they have the blessing of the central IT organization.
14:30
For this use case, we wanted to create a couple of self-service analytic layers. They wanted an analytics layer for supply chain.
14:40
So, they had data like supply, sales forecasts, budget.
14:47
At the SKU level, or maybe not at the SKU level, they wanted all those
14:50
facts essentially in the same area, to be able to drag and drop and create dashboards and reports.
14:58
A very easy to use self-service experience.
15:02
They also wanted an analytic layer for just sales, but maybe at a higher level, at a month level. Or maybe at a day level, but only at a customer ship-to. So varying grains of data, not mimicking the very granular data back in the package, was the definition of these analytic layers. And of course, we’re going to utilize that FM model as a source for the datasets and integrate everything, including
15:26
an external file, in the data module itself.
15:31
Part of the use case was something they really needed to have done, which is kind of difficult to do off a straight FM model, even in reporting. They needed year-over-year comparisons. They needed to do things like year to date versus last year.
15:49
Month to date, quarter to date, the current month.
15:54
The current quarter versus the prior one: all these kinds of comparisons were required. A lot of you out there who have had to write reports against FM models have had to do a lot of coding within the report to get to this.
16:06
So, they wanted that as a requirement when we created these models. We’re going to show you how we did that.
16:12
We leverage the relative time calculations that come with the product.
16:17
But, one of the requirements they needed was to be able to compare, not just prior year,
16:23
but they wanted to compare two years prior.
16:26
And this is something that doesn’t come out of the box, but we were able to do what the customer wanted with Cognos. We provided them not only two years, but three years and four years ago, with these relative time comparisons.
16:44
In today’s analytics, comparing sales to just last year is not a good comparison. They might want to compare things to 2019. So, although Cognos has done a really good job with relative time, they can’t think of all these things upfront.
17:03
And we’re going to show you how to actually change the relative time calcs for that, plus calculations like cumulative running totals. They wanted that built into the data module.
17:13
So, the short story here is they wanted as many calculations as possible done in the data module, leveraging the summarized datasets coming off the Framework Manager model, so that the analysts only have to do drag and drop, not coding or calculation building on top of that.
17:33
And for this specific use case, here are some of the design components we came to. When we design the datasets
17:42
for an analytic or summarized environment, you need to really think about what you’re doing and narrow your dataset definition, because we’re not going to bring all the rows back from your operational system.
17:56
In this example, the FM package had order detail, which contained maybe 50 million rows of data, but what they really wanted to do was create several datasets summarized to specific levels of granularity.
18:10
For example, they wanted sales order data for the last two years, plus the current year.
18:17
But they only needed it at the month level and the SKU level, and this was done for supply chain analysis, which tends to live at the SKU level.
18:25
We were able to summarize that into a dataset file
18:30
at about 1.3 million rows; it’s only nine megabytes in disk or memory footprint size for the dataset.
18:38
Dataset number two: they wanted two years plus the current year, but this one was going to be at the day level and the product line level. So they took away SKU. And this was for analytics, not necessarily supply chain.
18:51
The people in the sales and marketing area didn’t need to know what a particular SKU did for the last couple of years. One million rows; we were able to shrink that down, too.
19:04
And then there were the analytics they want to do across five years.
19:09
Maybe even at a higher level: month,
19:11
customer, product line. 1.8 million rows, 16.2 megs. So those numbers are not very large. And I think you might be surprised.
19:18
I’ve worked with some customers with very large back-end data warehouses where, after you went through the analytics requirements phase and came up with what you thought you needed for analytics,
19:30
and actually did your dataset building, you might be surprised at how much those numbers shrink down to what people need. Most people don’t need the billion rows in the data warehouse; they need a subset of it. And this is a great way to shrink that stuff down.
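To make that summarization concrete, here is a rough sketch in Python/pandas of the kind of aggregation those dataset definitions describe. The file and column names (order_detail.csv, order_date, sku, product_line, sales_amount) are hypothetical; in the real project, Cognos builds each dataset by querying the FM package, not by running a script like this.

```python
import pandas as pd

# Hypothetical order-detail extract: one row per order line per day per SKU,
# mirroring the grain of the FM package described above.
order_detail = pd.read_csv("order_detail.csv", parse_dates=["order_date"])

# Keep only the last two years plus the current year.
latest_year = order_detail["order_date"].dt.year.max()
recent = order_detail[order_detail["order_date"].dt.year >= latest_year - 2]

# Dataset 1: month x SKU grain, for supply chain analysis.
ds_month_sku = (
    recent.assign(month=recent["order_date"].dt.to_period("M"))
          .groupby(["month", "sku"], as_index=False)["sales_amount"]
          .sum()
)

# Dataset 2: day x product line grain, for sales and marketing.
ds_day_line = (
    recent.groupby(["order_date", "product_line"], as_index=False)["sales_amount"]
          .sum()
)
```

The point of the exercise is the row counts: grouping 50 million order lines to these grains is what gets you down to a one-to-two-million-row dataset that fits comfortably in memory.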
19:47
Like I mentioned, in terms of the design components, we did have to customize the out-of-the-box relative time to include four prior years.
19:56
We also had to create custom calculations
20:02
on top of the relative time calculations; we’ll talk about that.
20:05
And this is always evolving in terms of data module best practices and how to structure things.
20:12
And I don’t think there’s even a cohesive, comprehensive document that recommends that. There’s some great documentation from IBM on how to do data modeling with data modules. But actual implementations, how you store your datasets, how you bring things together, are still evolving.
20:29
What I’m seeing in the industry, and one of the things we used, was something called a dimensional data module library, where we created a separate data module just for dimensions. And we’ll show how we integrated that into another data module.
20:43
So, for a demo, for this webinar, I’m going to show you some aspects of how this was implemented.
20:51
For one of them, we’re going to show a dashboard that was created against the analytics layer, to show you how easy it is now to actually do your analytics.
20:58
We’re going to review the practical use of a dimensional library data module.
21:03
And then we’re going to do a high-level overview of the customization of the relative time functionality, because that one has a little bit of code in it.
21:12
I’m writing a blog with much more detail, but this will give you a flavor of how that particular piece works.
21:20
I’m going to tab over to our Cognos environment. What you’re seeing here, by the way, is the new Cognos 11.2 instance. And I’ll tell you right off the bat, I like it. I think it’s a good implementation; it’s much snappier. The icons are better, based off IBM’s Carbon design philosophy, which maps to all the other tools they’re building on the web. And a bunch of other things have been added to it.
21:47
I think we have another webinar on 11.2 out there, but my personal take: I’ve gotten used to it already, and I think it’s a definite improvement over the interface
21:57
of 11.1. There’ll be some training involved, but I don’t think it’ll be that much of a shift over. OK, so I just want to let you know what environment we’re in.
22:09
I’m going to go ahead and run this dashboard real quick, that I’ve created.
22:14
And one of the things you’ll see, as you can see right here, is that dashboards load much faster.
22:19
I mean, this is not a lot of visualizations, but I’ve had some dashboards with a lot of visualizations, and the render time is definitely 50% faster. And as you can see here, that was pretty quick. And of course, this dashboard is going against that analytics data module we created with datasets. So, nothing is going against the database.
22:43
And I really recommend, if you’re going to create dashboards, create them against datasets and data modules if you can. That would be my first reach, the first thing I’m going to ask: can we do this? Datasets and data modules give you proper render times, especially when you interact with them. So, one of the things I’m showing here, on the top, you can see those little cards: basically KPIs, actually using the KPI widget of the dashboard tool. And this is what they really wanted to see, and they wanted to be able to do it easily. They wanted to see, for example, month-to-date year-over-year sales, given the current date in the month. If it’s August 12th, how am I doing?
23:22
That is, versus August 1st through the 12th of last year. And you can see here, they’re up about 11%. They wanted to see that right away.
23:28
Quarter to date for the current quarter, year-over-year sales: they wanted to see that particular KPI, and there we’re down about 0.2%. And then there’s year-to-date sales.
23:39
Prior year full sales. Also prior year sales, two years ago.
23:46
It looks trivial up there, but imagine if you had to create that off a simple structure which just has a sales amount, and then you had to give that to analysts to try to calculate. They’d probably scratch their heads and take it down to Excel and do something different.
24:03
So, the other thing I’m showing over here is another graph, which is showing monthly sales for the last four years.
24:10
And if we actually expand that, we get a better look at it.
24:16
It shows you current year, prior year, two years ago, and three years ago, at a monthly perspective.
24:25
The green one here is the current year; the data ends in May for the current year. It’s showing you the monthly sales, and you can see how it’s comparing against the prior year’s sales, showing you where you are on a month-to-month basis.
24:41
And we also have two years ago and three years ago.
24:46
And as we went through the requirements, they also wanted to be able to show this from a cumulative perspective.
24:52
They wanted to see, for February, January plus February; for March, it has to be January, February, and March.
25:00
And they wanted that to be also drag and drop.
25:02
So, if you look at the cumulative graph over here for the last four years, now we see the cumulative sales for the current year, and cumulative sales for the prior year, two years ago and three years ago. As you can see, the cumulative sales for the current year flatline here because we haven’t hit June.
25:21
Then here are your prior year, two years ago and three years ago, kind of correlating with the current year. But this is what they wanted to see.
25:31
And they wanted to be able to create this easily, OK?
25:34
So that covers some of the requirements. And this one over here is just really showing me that I’m tying it out.
25:41
It’s just showing the fact that this is the full year sales for 2013.
25:45
If you look here, this data is actually showing current year, 2016.
25:49
So that was just a way for me to kind of tie things out.
25:53
Let’s go ahead and edit this dashboard and show the actual data module that it’s based on.
26:00
For the purposes of this webinar, we kept things kind of simple.
26:03
We’re just slicing things by dates, locations and products. But if you look at the sales folder, where I do have the actual sales fact table, we have other calculations in here.
26:16
And we have the sales table, which has your basic measures. Generally, from an FM package perspective, people are just presented with this: a particular amount that you slice by dimensions. But they want to be able to get to these more
26:33
complicated kinds of measures, right?
26:35
Well, with relative time, when you embed relative time into the fact table, if you expand the measure here, you are now presented with the slices that IBM allows you to do.
26:48
OK, and then you have things like the prior year sales, which we can drag here into the graph.
26:54
But then what we did for this webinar was add relative time for prior two years ago and prior three years ago, so that the graph can be easily made.
27:06
So what does that look like from an interface perspective?
27:08
Let’s go to the tab and we’ll go here to visualizations.
27:14
We’re going to drag the line graph over here.
27:18
And we’re going to recreate that cumulative sales graph.
27:22
So, they wanted it by month on the X axis, and they wanted to see current year, prior year, et cetera, sales from a cumulative perspective.
27:37
So, what we did was create some cumulative calcs. These cumulative calcs are based on those relative time measures. So, we drag current year in here.
27:50
You can see that being created. We’ll drag in another measure.
27:56
That’s your prior year.
28:00
Prior two years ago, and prior three years ago. We’re basically done, beyond just formatting this graph.
28:08
Adding or changing the format of the measures, maybe to currency, which you can do over here.
28:17
Beneath it, we’re going to make the currency US dollars.
28:25
Then we’re done, right?
28:28
Quick, easy, drag and drop. This is what they want to do.
28:32
They want these measures available, OK.
28:35
So, how do we do this in the data module? We’re going to go ahead and drop into that right now.
28:42
I go back to my content.
28:45
I’ll make a comment: the icons are much better, dashboards are a lot tighter. I think the rendering is quick. They really tightened up some of the visuals.
28:55
So just an FYI on that as you start kicking the tires on 11.2. I’m going to go back to my content, and over here I have the data module that the dashboard is built on. And again, it’s a very simple data module.
29:14
And I’ll make a comment, too: not everything IBM does is perfect, right?
29:18
One thing that I noticed about 11.2, and I think it was before that last fix pack for release 7: when you organized these objects and saved your data module, they would stay in this particular location.
29:34
It looks like with the latest release, it’s not staying. That was a bug before, and at one point they fixed it, so I’m hoping they fix it again. Just an FYI on this particular piece here.
29:43
And here we have sales, locations, products, and dates, OK? Very simple. But if you look at sales, like I showed you, there is our relative time over here on the sales measure.
29:59
OK, and the reason why we have that is because on the metric here, on the Properties, I have a lookup reference back to dates over here.
30:09
Again, we have a webinar which kind of goes over this particular implementation, how to do this in depth.
30:16
But the actual calc is very simple, if you look at this.
30:21
If you edit this calculation, it’s just a running total.
30:24
Really simple, but we’re leveraging the relative time slice, or measure, that was embedded automatically after you implemented relative time.
30:37
So all I had to do here was, say, if I want the running total for the current year, sales amount, I just go and drag over current year sales into here.
30:47
OK. Then, the same thing with prior year: if you look at this edit calculation, it’s the same thing, but I’m going over here to sales amount and dragging in my prior year. The same thing for prior two years and prior three years; we’re leveraging that for each particular calculation.
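For readers who want a concrete analogy: each of these calcs is just a running total over one relative-time measure. Here is a rough sketch in pandas; the file and column names are hypothetical stand-ins for the dataset’s fields, and in Cognos the same thing is done with the data module’s running total calculation, not a script.

```python
import pandas as pd

# Hypothetical monthly extract of the sales fact, one row per month, with the
# relative-time slices already split out into separate measure columns.
sales = pd.read_csv("sales_by_month.csv").sort_values("month_number")

# Each cumulative calc in the data module is a running total over one
# relative-time measure; cumsum() is the pandas equivalent.
for col in ["current_year_sales", "prior_year_sales",
            "prior_2_years_sales", "prior_3_years_sales"]:
    sales[f"cume_{col}"] = sales[col].cumsum()
```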
31:14
So, those are some of the calcs. The other aspect of this implementation: over here, we’ve got our sales fact dataset. Again, this is the dataset; we have all our datasets created. I’m going to go back to content over here.
31:29
So, we created a location dimension data set, a product dimension dataset. The sales fact dataset.
31:37
But if you look at this data module, the sources here show you mainly one fact dataset.
31:44
And what I’ve done is linked back to a dimensions library, which is a data module.
31:51
So, over here, I brought in dimensions from a dimensional library and brought them into this data module.
32:02
And you can see by this little green icon that it’s linked back over here. What’s the benefit of that?
32:09
So, if you are creating different data modules for different subject areas, there’s a good chance that some of the dimensionality is always going to be re-used. You might be creating a supply chain data module, but you’re always going to slice it by products and by locations.
32:27
If you always bring in that same dataset and then change the names and everything back to the way your data module for sales was, it’s a lot of repetitive work. As people in IT, we want to leverage things we’ve already done.
32:44
So with this technique, we added a dimensional library data module, which already has the dimensions in place, and dragged them over here. And any changes that happen in the library will be reflected over here.
32:56
I’m going to close this one down and show you that dimensional library. Here’s my dimensions library.
33:06
And here’s where we actually bring in the datasets, and I’ve renamed them.
33:13
Not the dataset itself;
33:15
I’ve renamed locations. And if I go ahead and rename this particular field, for example, to State,
33:27
And then save that.
33:35
Then bring in my other data module.
33:41
As you can see, it inherited that.
33:44
And also, as an aside, when you work with data modules and you start renaming things, you generally don’t have to worry about it.
33:53
Unlike Framework Manager, where if you rename a column, you break reports.
33:59
Because if you look at the name here, unit price, for example, under properties, there’s a label.
34:06
And if you go under advanced, there’s an identifier.
34:09
You could change this label many, many times, and it won’t break your reports, because what the reports or dashboards are really referencing is the identifier.
34:18
If you change the identifier, then you will break reports.
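A tiny sketch of that two-layer naming, with made-up values, just to pin down the design point:

```python
# Sketch of the label/identifier split in a data module column.
# Reports and generated SQL bind to the stable identifier; the label
# is just the display name and is safe to rename at any time.
column = {
    "identifier": "UNIT_PRICE",   # referenced by reports and dashboards
    "label": "Unit Price",        # shown to users
}

column["label"] = "Price per Unit"   # existing reports keep working
column["identifier"] = "PRICE"       # this is the change that breaks reports
```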
34:22
So, they kind of have a two-layer approach to it. As part of the best practice, when you create your data module, make sure your identifiers are consistent.
34:32
The benefit of doing that is when you actually look at SQL from reports against data modules, you’ll know what it’s referencing. So, back to my original point.
34:42
Now I’ve leveraged the dimensions library. For a different fact, I could bring in other facts, add the dimensions library, and then bring the new dimensions over here if I need to.
34:56
But this could be an IT-maintained dimensional library: they can control how it looks, they could put filters in there, and things like that.
35:06
OK, that’s one of the use cases that we evolved our best practices from. We used it because we were building not just one data module but two, three, four data modules, all leveraging the same dimensions. So, we learned that from one of our clients. Finally,
35:26
let me talk about the relative time piece.
35:33
Here, under dates, there are all these different filters as well.
35:37
Besides the measures, we added these two things, right?
35:42
And let me go back to my dimensional library.
35:47
For those of you who have never implemented relative time, we have a webinar on relative time.
35:54
And this is going to kind of build on that.
35:56
So I suggest you look at that if you get a little lost with what I’m showing you over here. But the dimensions library over here has dates, and it has something here called the Gregorian
36:07
calendar. This is what drives the relative time piece. We’re staying here in our dates dimension.
36:15
We are going to do a reference lookup here to our Gregorian calendar.
36:22
This is what Cognos provides you in your content, under team content.
36:27
If everything is installed the way your administrators do it, kind of out of the box, there is a calendars folder.
36:34
And we have data modules for Gregorian calendars, for fiscal calendars.
36:43
They give you source files, tools, to create your own calendar.
36:48
Everything is in here.
36:50
And the way this works, if you look at the Gregorian calendar here.
36:56
Everything is kind of hidden, but the key source is this data module, and it’s using the same technology that we use when we create other data modules.
37:05
It’s just importing a source, a CSV file: a Gregorian calendar CSV file.
37:12
And if you actually report off of that and bring it down to Excel, which I’m going to do here,
37:20
It looks like this.
37:22
It’s just a bunch of rows.
37:25
And you can take this Excel file. The first column is called … Date.
37:31
And then the previous day relative to the date, the next day relative to the date, the year of the date, anchored at the first day of the year, and the prior year, OK? You take this file down and you manipulate it. What we did was add two columns to this spreadsheet, called two years ago and three years ago: prior two years, prior three years.
38:01
All you have to do is create a simple Excel formula that makes this column two years ago relative to that date, populate it all the way down, and do the same for the other column.
38:17
As you can see, for a current-year date it now carries the same date two years ago and three years ago. You create this file.
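Pedro does this with a simple Excel formula. If you’d rather script it, here is an equivalent sketch in pandas; the file name and column names (the_date, prior_2_years, prior_3_years) are guesses at the calendar file’s layout, so adjust them to match the CSV that ships with your Cognos samples.

```python
import pandas as pd

# The Gregorian calendar CSV: one row per date, plus derived columns
# like prior day, next day, and prior year.
cal = pd.read_csv("gregorian_calendar.csv", parse_dates=["the_date"])

# Add the two custom columns: the same date shifted back two and three years.
cal["prior_2_years"] = cal["the_date"] - pd.DateOffset(years=2)
cal["prior_3_years"] = cal["the_date"] - pd.DateOffset(years=3)

# Save the modified file, ready to upload into the Cognos environment.
cal.to_csv("gregorian_calendar_custom.csv", index=False)
```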
38:27
And then I’m going to go back to my content over here.
38:32
And upload it to your Cognos environment.
38:35
Here’s that file.
38:37
And then what you can do is just take a copy of the Gregorian calendar they gave you and create your own.
38:43
And I created one here called Gregorian calendar demo.
38:48
This particular data module actually sources from that new CSV file that I uploaded.
38:55
And if you look at the data in the grid, it shows you here the two years ago and three years ago columns. Now you have the columns it needs to create the calculations for relative time.
39:08
The final step, which I would say is a little trickier, is to copy what they did over here. Remember, they didn’t have the two-year and three-year, but they had prior year.
39:17
If you actually edit this filter, it shows you, in the macro substitution expression language, how they did it.
39:27
And if you figure this out, you can copy this and create your own filter.
39:37
Now, this will take a little bit of work in terms of trial and error.
39:42
They are using a queryValue function; if you’ve been knee-deep in Framework Manager, you know this as a macro substitution type of function. We have some expression substitutions here.
39:54
But we’re going to write a blog on this, too.
39:56
I could get into the details, but as you can see here, it’s actually referencing that two years ago column, and like prior year, it’s building a dynamic BETWEEN in the SQL syntax as it creates those slices.
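The blog will walk through the actual macro, so we won’t guess at Cognos’s expression syntax here. Conceptually, though, the filter does something like the following sketch: look up the calendar row for the as-of date, take the “two years ago” anchor it carries, and turn that into a BETWEEN range. Everything below (the function, the column names) is illustrative, not Cognos syntax.

```python
from datetime import date
import pandas as pd

def prior_2_years_range(cal: pd.DataFrame, as_of: date):
    """Resolve the 'two years ago' slice the way the macro-built filter does:
    find the as-of date's calendar row, then bound the year that its
    prior_2_years column points at."""
    row = cal.loc[cal["the_date"] == pd.Timestamp(as_of)].iloc[0]
    anchor = row["prior_2_years"]
    start = anchor.replace(month=1, day=1)     # first day of that year
    end = anchor.replace(month=12, day=31)     # last day of that year
    return start, end                          # becomes: the_date BETWEEN start AND end
```

A year-to-date variant would simply end the range at the anchor date instead of December 31st.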
40:10
OK, now that I’ve done that, I’ve created my own
40:18
Gregorian calendar data module, based on the one provided by IBM, then incorporated that into my dimensions library, so that’s always there for use.
40:34
Then I incorporated that into the data module that was used
40:41
for the dashboard I showed. OK, so it’s a use case. It’s a series of best practices, a series of things that are kind of evolving.
40:52
But, ultimately, you want your end users to be able to create something like this fairly easily.
41:01
This is kind of a theme that we’ve been doing for the last year with other data-module-type Cognos webinars. We’ve talked a little bit about relative time. We’ve talked about different architectures. We’ve talked about the differences between Framework Manager and data modules, and whether you should replace it. And this is actually a very interesting use case, because you’re leveraging all the work you did in Framework Manager, taking a piece of it, and then creating new functionality with datasets and data modules to produce this in a much easier manner.
41:29
OK, so let me go back to my slide deck here and see where we are.
41:35
So, in summary, kind of the things that I was saying: there are a lot of features in the new releases, 11.1 to 11.2, and customers are really starting to comprehend what they are
41:47
and how to use them, right? Those use cases and best practices with data modules and datasets are really evolving over time.
41:56
And some of the things we’ve learned: relative time can be customized; we can build better architectures, potentially with the data module library concept; and we can still leverage existing Framework Manager infrastructure. That’s an important piece.
42:17
Because I think early on when data modules came out, the question was, do we want to replace all our FM models? And that’s a lot of work, right? And there are a lot of good FM models out there. But we can leverage them. We can create a new analytics architecture
42:28
leveraging data modules and datasets.
42:30
We’re creating micro marts. There are data marts in the database world, and these are little micro marts. I mean, Tableau can do data extracts as well, and Power BI.
42:46
But you can now create these data modules in Cognos; they’re like micro versions of your data warehouse, and they leverage the in-memory processing of Cognos. Instead of the spreadmarts, instead of people taking things down to a spreadsheet and basically creating the same thing.
43:04
Relative time is functionality that can be customized.
43:08
So, Senturus, as a company whose consulting services have evolved over time with Cognos, can definitely help you discover new ways to leverage your Cognos Analytics investment.
43:22
If you don’t want to get into the complexities of trying to customize relative time beyond the out-of-the-box, or all those calculations, and want a well-implemented data module with datasets, we’ve done several projects on that and have come up with evolving best practices. So, you may want to evolve your investment beyond just simple reporting.
43:45
I think Cognos obviously does that very well. Great, pixel-perfect PDF reports, it always has. But this is new functionality
43:56
that has evolved over time and is, I think, really starting to gain some traction.
44:01
So, that’s the webinar. I’ll turn it back over to Mike.
44:09
They’ve gotten pretty mature with some of this stuff, and as you can see, you can really move the needle on self-service analytics.
44:17
And, frankly, I’m a little surprised that Cognos and IBM haven’t made more of that exact feature.
44:24
That automatic creation of relative time buckets, which is easily customizable. Because that’s the first thing you need to do, and it’s so powerful to be able to do those comparisons. And it’s very difficult to do
44:37
if you don’t have something to help you. So, it’s really powerful functionality. So, thanks for that great information. Stick around, everyone, for the Q&A afterwards. Just a couple of quick slides about Senturus here.
44:51
We’ve been doing these Knowledge series events here for well over a decade, but on our website, we’ve got all kinds of other great technical tips.
44:59
Our blog, product demos, upcoming events that you can register for, like this one, and more in-depth information on all things business analytics.
45:13
And, specifically, if you go to the next slide, we have a couple of links here for specific Cognos data module resources.
45:21
You can click on those links, or you can search for those titles: data module architectures and use cases, successful self-service analytics, and a comparison of Framework Manager versus data modules.
45:33
And I’m sure you did at least one, if not all of those, Pedro. So, you can hear his voice even more.
45:41
So as Pedro mentioned, our expertise focuses solely on modern BI with a depth of knowledge across the entire stack.
45:51
So we can help you accelerate the adoption and implementation of self-service analytics at an enterprise scale.
45:59
And our clients know us on the next slide for providing clarity from the chaos of complex business requirements, disparate data sources, and constantly moving targets, changing regulatory environments.
46:10
And we made a name for ourselves based on our strength: Bridging that gap between IT and the business.
46:16
We deliver solutions that give you access to reliable analysis, ready data across your organization, so you can quickly and easily get answers at the point of impact in the form of decisions you make, and the actions you take.
46:28
As mentioned a couple of slides earlier, we offer a full spectrum of BI services. Our consultants are leading experts in the field of analytics, with years of pragmatic, real-world expertise and experience inventing the state of the art.
46:40
I’ve been doing business analytics for over 25 years, and that’s not unusual at all here at Senturus.
46:51
In fact, we’re so confident in our team and our methodology that we back our projects with a 100% money-back guarantee, which is unique in the industry.
47:00
We offer a complete spectrum of BI training in all the different modes you would expect: tailored, customized group sessions, small group mentoring, instructor-led online courses and self-paced learning.
47:14
We’re particularly ideal for organizations that are running multiple platforms or moving from one to the other. So, folks like Pedro speak all the different languages of Cognos and Power BI and Tableau.
47:27
And we can customize
47:30
the training such that it meets the needs of your specific organization.
47:34
And we’ve been doing this for a while: we have been focused solely on business analytics for over 20 years.
47:41
At this point, we’ve worked across the spectrum from Fortune 500 down to mid-market companies, solving business problems across virtually every industry.
47:49
And many functional areas, including the office of finance, sales and marketing, manufacturing, operations, HR, and IT. Our team is both large enough to meet all of your needs, yet small enough to provide personalized attention.
48:03
If this sounds appealing to you and you think you might fit the mold, we are hiring.
48:07
We’re looking for, as you can see from the titles there, everything from project managers to ETL developers and everything in between.
48:15
So if you’re interested in looking at the job descriptions, you can see the link there, or you can send your resume to jobs@Senturus.com.
48:26
And with that, we come to the questions.
48:30
So I don’t know if you had a chance to look at that at all, Pedro, but there’s a question there about: is there a way to check which reports use a particular data module?
48:40
We have an option in Cognos Framework manager to show object dependencies.
48:47
I would say, you’d probably have to look maybe more at the content audit database.
48:54
Oh, we’ve actually done some work in that area, where we decode the audit data and kind of extend it a little bit. And I think, Michael, don’t we have a product like that
49:05
that we’re marketing, or something like that? But yeah, I would use the audit database to find out the relationship between the two, sure. Data module to package to report.
49:15
The audit database information, yeah. And we do have a product that helps with migrations: it sort of catalogs all that information, combines it with audit database information, and helps you figure things out. It’s like the decoder ring for all your content if you’re doing migrations. But it looks like this is more of an operational type question.
49:38
Someone was asking about the cume calc using the running total of the sales:
49:45
do you need to ensure that the data is sorted ascending by date, and how do you specify that sort order?
49:54
So, running totals, obviously, depend on the order. In the graph, the month was already sorted, because in the data module, for an attribute,
50:04
I can say, for example, that a label like month can be sorted by a different column, which is a month number.
50:12
So then, when I put the two together in the graph, you saw it went January through December properly, and then it took the data and added it up that way in the graph. Right? So that’s how it came up with the sales facts sorted correctly in that particular implementation of that visualization.
50:32
So you set that at the data module level rather than the visualization level: the label for month is sorted in the data module.
50:40
So every time you drag in the month label, it’s always going to go January through December, not alphabetically, because behind the scenes, as a property of an attribute, you can tell it how you want it sorted, and it can be a different field. In our date dimension, there is an attribute called month number.
51:02
And we sorted by that. Then you put the two together in that visualization, and that’s how it accumulated correctly.
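A quick sketch of that sort-by behavior, using the same hypothetical monthly file as before: sort the label by its companion month number, then accumulate.

```python
import pandas as pd

# The month label would sort alphabetically (April, August, ...) on its own,
# so sort by the companion month_number column, the way the data module's
# "sort by" property does, and only then take the running total.
df = pd.read_csv("sales_by_month.csv")
df = df.sort_values("month_number")
df["cume_sales"] = df["current_year_sales"].cumsum()
```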
51:09
Great question: how do you keep the calendar current?
51:13
So, generally, the CSV calendar is created one time, because you create a calendar with rows like the default Gregorian calendar from Cognos, which goes, I think, from 1950 to 2040.
51:30
Right.
51:31
You’re covered way out there. I think the question is maybe about those relative time periods, right? So, tomorrow, your time periods change. And that works because it uses that macro language: it runs against the calendar and calculates.
51:45
Yeah, and one thing I didn’t really touch on in the creation of relative time:
51:51
I got the question, well, relative to what? There’s a parameter built into Cognos called the as-of date. And by default, it’s generally the system date.
52:01
So, today’s date, so it’s anchoring against that.
52:04
Now, you can change that behavior by exposing the as-of-date parameter and then dynamically changing it in the interface.
52:12
So you can change the anchor date per user; a user can do their own thing if they want to.
52:21
Interesting. And then there’s a question about the Gregorian calendar that you used in this situation.
52:28
Did it come from the samples? Yes, it did.
52:31
What I did was use the Gregorian calendar and save it as my own, under a different name.
52:39
Then I modified that Gregorian calendar to link to a different CSV spreadsheet, which is the one I modified with the additional columns.
52:51
And there’s a question about: have you created rolling period and/or “by any M” calculations for relative dates?
52:59
Interesting.
53:01
What’s “by any M”, you know? Maybe that means, like, biennial, maybe.
53:07
Uh, I don’t know if that’s every other millennium. I don’t know, I’d have to Google that. I’ll
53:15
just say one thing. From what I’ve seen in there, if you’re good at coding and figure out how they’ve done things: for example, the initial rollout of relative time didn’t have week slices, like this week versus last week.
53:34
Then in the next rev, they put the week in.
53:38
They added a column for week, and all they did was the same thing I did, and they called that a release.
53:43
So to me, if you know what the requirements are, you might be able to continually keep adding, and creating your own custom relative time scenarios once you get the basics down of how they do it.
53:57
Now,
54:10
basically, they have this huge spreadsheet with all the dates in the world, and then different slices of it, starting with the date and then the prior date.
54:10
The next date, last year. If you figure that out, I think you could really customize it a lot.
54:17
Sounds good, right? You can sort of play off the ones that are there, and you’re just tweaking that syntax. So, somebody asked about providing the link for the webinar reference that covers the creation of the data module, so they can pull that back down.
54:31
And I think it was one of those three that we shared a few slides prior, right, Pedro? Where you kind of go into how those are created?
54:41
Are you talking about a different presentation?
54:43
A different presentation. Yeah, I think it’s one of those.
54:46
Yeah, so go ahead and you can look.
54:48
Either get that link, or just do a search on the Resources tab for data modules, and that should narrow it down sufficiently so you can find it.
54:59
How are data modules and the files within them refreshed?
55:03
So, as an attribute of a dataset, you can create a schedule.
55:10
Good question, because once you create a dataset and extract it from the database, it’s stale right away.
55:14
But generally, if you’re coming from data warehouse, the ETL runs every day, or once a week, depending on your refresh cycles, so you can sync that up. And you could have a schedule for each dataset to refresh itself, say, every morning at 6:00 AM.
55:27
You can also use triggers; that’s a little more advanced.
55:30
But out of the box, you can just put each dataset on a schedule and have it run by itself, so you don’t have to worry about refreshing it.
55:41
Alright.
55:43
The other question here is: is relative time available in 11.1.7? And the answer is yes, it is. Then there’s a question of, what was the name of the webinar that shows how to use relative time?
55:59
And I’m not sure of the name of that one. You did it, actually. Isn’t it the data module architectures one, or how to successfully implement self-service? It might not be either of those particular links there.
56:11
I think there’s another one out there for relative time. Once you’ve got to the Senturus resources, you can search for all the Cognos stuff. I’m pretty sure it’s still out there.
56:22
We’re out of time.
56:24
That one actually goes into how we actually build it, step by step.
56:29
Let me see if I’m just doing a quick search here.
56:31
Yeah, so, it’s called Using the Cognos KPI Capability and Relative Time structures.
56:37
So just search for relative time, and that thing will pop right up; then you can poke around there.
56:43
Let’s see. And then, how are datasets refreshed? We already kind of answered that question, I think.
56:50
Do we need to have a JDBC connection to the Framework Manager data sources to leverage data modules?
57:01
So, when I build the datasets, I’m coming off a package.
57:06
Um.
57:08
So theoretically, if you’ve got an old CQM package,
57:15
It should work, because I’m just querying the package.
57:18
Most packages nowadays have a JDBC connection with DQM, but whatever the package is set up for, that’s what I point to.
57:27
I point to my package, and I say create dataset.
57:33
And then it’ll basically create the dataset. Once you are in the data module world with datasets, you’re completely in Dynamic Query Mode.
57:40
And you don’t have to worry about any of that stuff, because you’ve basically now disconnected from the database.
57:46
Compare creating the summary metrics in the data module versus in database views.
57:54
Well, I don’t think there’s any comparison, at least from what I’ve seen, if you’re going to go against a database view.
58:01
You’re basically going across the latency of a network into the database, running a query, waiting for Oracle or SQL Server to return the results back across the network, and then displaying them in Cognos.
58:16
When you’re using datasets and data modules, you don’t have that latency anymore. You’re accessing a file on the Cognos server,
58:24
and the cool thing about datasets is that once accessed, they go into memory.
58:30
So, once it’s in memory, you’re no longer doing disk access. So, I don’t really think there’s a comparison.
58:39
OK, is there any limit on the volume of data that you can cache in a dataset for refresh? Yeah.
58:46
So, I always start off by saying, it’s not meant to bring over the billion-row table.
58:54
So, you definitely need to figure out some limitations there. What is the data you really need?
59:00
Don’t bring everything over; from a column perspective, bringing over all the columns is going to eat up your memory.
59:08
Figure out what you need.
59:09
Now, from a sizing perspective, I’ve brought over tens of millions of rows in a dataset.
59:17
You do need to worry a little about sizing on the memory.
59:21
If you’ve got a one-gig Cognos machine, memory-wise, that’s not going to work too well. The more memory, the better.
59:29
Tune up your data cache settings for datasets; you can actually allow a bigger memory footprint, so they’ll stay in memory longer.
59:36
So, yeah, those are all things you’ve got to consider, but the biggest thing I can say is: don’t bring over a billion rows. It’s not meant to do that; it’s meant to solve a specific, high-level summary analytical question. If you need granular detail, you can drill back to the operational source.
59:53
Is there a way to store datasets in another location that’s off the Cognos server?
59:59
So, datasets by default are stored in the content store database, which, as usage grows, you definitely don’t want to do.
1:00:07
You can change the configuration settings in Cognos so that datasets are stored on a file server, on disk, and that could be anywhere, right?
1:00:17
But you do want to take into account any network latency as you cross the network and save the dataset files to a particular server location.
1:00:28
And then, when is a dataset released from memory, after the report runs, or after a user session?
1:00:35
It’s round-robin in nature, depending on the footprint.
1:00:38
So I access a data set. It goes into memory.
1:00:42
As people start accessing datasets, memory fills up, and older datasets age out of memory as newer datasets come in.
1:00:51
That’s how that generally works.
1:00:53
Sounds good.
1:00:54
Well, that is the last of our questions. And it is one minute after 12. So I think we can put a bow on this and call it a day.
1:01:02
I’m going to go over the last slide. First of all, thank you to Pedro for another great session.
1:01:09
And thank you to all of you for taking time out of your busy schedules to spend a little time with us here at Senturus.
1:01:14
If we can be of any assistance with anything business-analytics-related, relative to any of the stuff we talked about, feel free to reach out to us at 888-601-6010, if you actually still use a phone anymore. Otherwise, you can e-mail us at info@senturus.com or hit us up on the chat here. We look forward to hearing from you and seeing you at one of our next Knowledge Series events. Thanks a lot, and have a great rest of your day. Bye now.