Cognos Analytics Performance Tuning Tips

Cognos Analytics has lots of moving pieces. A bottleneck anywhere in the software or hardware can create major issues. Slow-running reports and system instability, particularly around memory, are not uncommon.

While there is no silver bullet that will fix everything, becoming familiar with Cognos configuration and settings options and knowing how to fine-tune them will help you get the most out of the platform’s potential.

In this on-demand webinar, our in-house Cognos expert and installation practice lead, Todd Schuman, shared lots of tips and tricks to improve the performance of your Cognos environment.

Topics covered in this high-performance webinar include:

  • Hardware and server specifics
  • Failover and high availability
  • Interactive Performance Assistant
  • Improvements to dashboard performance
  • Data sets
  • Audit reporting
  • Non-Cognos related tuning

Presenter

Todd Schuman
Practice Lead – Installations, Upgrades and Performance Tuning
Senturus, Inc.

Todd has over 20 years of business analytics experience across multiple industries. He also regularly connects with the IBM product development team and has been at the forefront of the Cognos Analytics upgrade and installation process since it debuted.

Questions log

Q. Does Senturus provide advanced training for creating extensions for Cognos Analytics?
A. Senturus does not have a training class for this topic, but we have consultants who can help you with these requests. Please reach out if you would like to set up a time to review your needs.

Q. Where is the option to view the underlying table structure in Cognos Analytics?
A. The tables need to be manually reviewed, using either SQL or the source Framework Manager model or data module.

Q. How does security work with datasets in Cognos Analytics?
A. There is no security in the dataset. You need to create a data module on top to secure it. Here is a good reference to the process: Securing data – IBM Documentation

Q. Are the dataset files stored on a Cognos server like the physical files?
A. The datasets are stored in the Content Store database. When they are needed, they are written to disk, in the data folder. The datasets are then cleaned up automatically after being idle for a period of time. The file location and idle duration can be modified.

Q. What is the difference between using the Cognos Analytics app from the dispatcher and the web server?
A. The main difference is the user experience. Through a web server you can use single sign-on to automatically log users in without prompting them. You can also set up multiple gateways to provide custom landing pages for different users. IIS or Apache provide additional benefits to enhance the web experience as well.

Q. Should we separate Content Manager and Application Tiers in Cognos Analytics?
A. As with many Cognos configurations, it depends! It won’t hurt anything to do this; the only downside might be that the separate Content Manager is underused.

Q. Is it better to change the low affinity connections instead of the max processes in Cognos Analytics?
A. You can do that, but we like to keep the existing ratio of high/low and increase or decrease the max connections. This way you are still allowing the high affinity requests enough connections to do their job without impacting the low affinity requests.

Q. Is the IPA used in the demo available in 11.1.7?
A. Yes, this tool has been around for a few years now.

Q. What could be causing slower restarts after I conduct my quality assurance tests with dynamic cubes in Cognos 11.2.1?
A. It’s difficult to say. Dynamic cubes require a lot of monitoring and debugging. We would need to do a deep dive on your environment to say what the root cause is. If you’re interested, please contact us.

Machine transcript

0:11
Hello everyone and welcome to today’s Senturus webinar on Cognos Analytics performance tuning tips. Thanks for joining us today.

0:21
A quick overview of today’s agenda. We’ll do some introductions. Then we’ll talk about locating the source of poor performance in your Cognos environments. We’ll talk about report specific tuning as well as architecture and environment tuning. We’ll talk about how to tune your dispatchers. A little bit of an overview of Senturus and additional resources that we have available to you. And as I said, we’ll do some Q&A at the end.

0:47
Our presenter today is Todd Schuman. Todd is our practice lead for installations, upgrades and optimization here at Senturus. Todd has been in this business for a very long time. If you’ve worked with him, you know you’re in good hands with your Cognos environments. If you haven’t worked with him, you should reach out to us so you can get some time with Todd. Todd lives outside of Washington DC with his wife and two daughters.

1:15
He also works with the IBM product team on a regular basis, so they keep him up to date on what the latest and greatest features are and what’s coming. I’m Steve Reed Pittman, director of Enterprise Architecture and Engineering here at Senturus. I’m just here for the intros and outros today, maybe a little bit of Q&A, but Todd is going to be the star of our show. Let’s do a couple of quick polls before we get started.

1:45
So the first poll is just what version of Cognos you’re using, whether you’re pre-Cognos 10, which is pretty rare these days but we do see it occasionally, all the way up to the current 11.2.x versions. I’m going to go ahead and end the poll here and share the results so everybody can see. A little over half of you are already on one of the 11.2 versions, a little under half are on one of the 11.1 versions, and

2:14
a few of you are still on some of the older versions. I’m going to go ahead and stop that and let’s go on to the second poll, which is about your current performance pain points. There are some common areas where we run into performance issues in Cognos. So do you have slow running reports?

2:37
Do you have failing reports, or just an unstable system in general? Maybe you have dispatchers that crash on a regular basis, or do you find Cognos administration just to be confusing? Or other. I’ll go ahead and close it out and share those results. So again, a lot of you have slow running reports.

3:01
But things are pretty well distributed across the other items: some report failures, stability, administration. Luckily for everybody today, Todd is going to touch a bit on all of these things. So with that, I’m going to go ahead and stop sharing and I will hand it over to you, Todd. All right. Thank you, Steve, and thanks

3:25
to everybody for tuning in today. I do have a lot to cover, so let’s get right into it. Before I get started, I do want to mention that Cognos itself is a very complex tool. A single bottleneck in the software or hardware can have a ripple effect on the entire system. In my experience, the most common bottlenecks are obviously the reports. This could be for multiple reasons: it’s a poorly written report, or it could be the query that’s driving the report.

3:54
Maybe it’s bad SQL or a bad model. Sometimes the database itself needs tuning. The hardware and user load for the server: is it undersized or oversized? Do you have enough CPU and RAM? Can your environment handle 40 concurrent users, if that’s what you need? Dispatcher settings: do you have the right settings applied to your batch, report and query services? And then finally, software defects and bugs. You can have all of the above

4:22
tuned perfectly and still be running into performance issues, and it could be related to a bug in the operating system or a bug in that version of Cognos. We’ll touch on all of those today. That said, the most common issue is a suboptimal model or a poorly authored report or dashboard; that’s typically the number one culprit we run into. So we’ll go over some different issues that we encounter from time to time and

4:51
hopefully give you a good place to start to figure out where you need to put your time and effort to fix some of these performance issues. The first step when troubleshooting is to find the source of the performance loss. So where do you begin? To answer that, you have to ask yourself some questions about the issue, and I usually like to start with the reports, asking questions like: is this a single report, or is it a single subject area or data mart?

5:20
Are all the reports that hit, say, package A slow, while reports against package B seem to be working fine? Maybe we have 20 different reports that hit package A and only one of them is running slowly. How is the environment itself performing? Are the users saying everything’s slow? Is it always slow? Maybe it slows down at certain times of the day; right around noon things get really slow, or towards month-end close or year-end close the whole system really slows down. You need to

5:50
answer these questions to give yourself more insight and know where to start, because, as I mentioned, there are so many different aspects to Cognos: reporting, modeling, the environment, the database. Lots of different areas could be the cause, so the more information you have, the easier it’s going to be to track down these issues. Okay, so let’s start with poor performing reports. I’m assuming there’s a single report that people are complaining about. How can we troubleshoot that?

6:17
The good news is we have some built-in tools that can help us troubleshoot those reports. This is a feature that’s been in the tool for a while, but because of the way it’s hidden, people may not know about it. It’s called the Interactive Performance Assistant, aka the IPA, built right into the reporting tool. I’m not sure why IBM buried it in the run options; I’d prefer to see it more front and center in the tool, but

6:45
I’m going to show you how to access it if you don’t know. Just open an existing report in edit mode. Under the little drop-down arrow next to the run button, there’s an option called Show run options. Click on that and there’s a second page of properties with a slider called Include performance details; I highlighted it in the screenshot. Just turn that on, click OK and then run the report in HTML. This only works in HTML, by the way, so you have to make sure you’re running it in that format.

7:15
Once the report has run, you should see something like this. This is the IPA output. If you look closely you can see there’s a little execution time window under each object on the page. You also get a total execution time for the page at the very bottom; I highlighted those different areas. This can help you focus on where to look for possible issues, so if a specific object is running longer than you’d expect,

7:43
you can focus right on that. It might be hard to see, but if you look at this example, on that top row, camping equipment has the longest execution time for that metrics image; that red square looks like it takes about four times longer than the others to render. It’s also got the longest running time for that bar chart visualization, about two times as much. So this tells me I should focus on camping equipment and look at the queries driving it; there might be something there.

8:14
Maybe it’s more data, or it could be something with the query or the filters, but it’s a good place to start, and it gives me a very easy way to see why this report is running slowly and which piece of it could be causing the issue. I can also look at the very bottom and see the page execution times. This example is just a single page report, but with a multi-page report that can help me focus on which page I need to start with.

8:39
If you have three or four pages, maybe the first two pages are fast but the third one is the one that’s dragging, and you’re sending this report out in a PDF or an Excel file where the whole thing has to get rendered. That could be causing it to slow down. So there are different ways to look at this information and make some smart decisions on where to go. Which leads us into: now what? You have the information on where the report issue exists; what do you do with it?

9:04
The first thing I do is look at the data or the query that was mapped to that specific object. Every object in Cognos (your lists, your crosstabs, your visualizations) has to have a query associated with it, and you can get that information either from the properties, or you can click on the object itself and from the toolbar select Go to query. That’ll jump me right into the query itself. I usually start by just running the tabular data. There’s a way to

9:30
right-click on a query and view the tabular data, basically saying show me the raw data before it gets formatted into the list or crosstab or visualization, however you’re presenting it to the users. Just look at the raw data and take note of how quickly that query comes back. If the query itself runs a long time before it even gets to the rendering of any visualization or objects, that’s usually a red flag that there’s something wrong with the query. From there I usually try to grab the SQL. You can get the SQL again from the properties, and

10:00
it’s going to be a bit messy, so I’ll usually just use a web SQL formatter. If you Google SQL formatter, you should find at least five or ten free ones. You can paste your Cognos SQL in there, which is usually pretty ugly, and it’ll lay it out and make it much easier to read. Once you have that, you can review it. I look for things that just seem off, you know, if you see full outer joins,

10:25
multiple nested subqueries, unrelated stitch queries, unnecessary table joins, something that just doesn’t look right, that’s usually a good place to start. If you’re very familiar with the tables and joins and you’ve worked closely with the back end, you should know right away if something is off in that SQL. If you see a table or fields being referenced that have no business being in there, it’s typically an issue with the SQL. And again, if you’re writing reports off of packages, most likely there’s something in the package that’s going to need to be addressed.
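If you’d rather not paste query text into a public website, the same cleanup can be scripted locally. This is just a rough sketch using the open source Python library sqlparse, which is not part of Cognos; it reindents the SQL and uppercases keywords so the joins and subqueries stand out.

  # Sketch: format Cognos-generated SQL locally with sqlparse (pip install sqlparse)
  import sqlparse

  # Paste your Cognos SQL here; this short sample stands in for the real thing
  raw_sql = "select T0.product_line, sum(T0.revenue) as revenue from sales T0 group by T0.product_line"

  print(sqlparse.format(
      raw_sql,
      reindent=True,         # put each clause on its own indented line
      keyword_case="upper",  # uppercase keywords for readability
  ))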

10:55
If the queries look fine and run fine, the next step is to look at the calculations. Are there any overly complex IF/THEN or CASE statements? Those are something I find a lot: very complex, multi-level, if this then do this, then if that, and it just gets kind of crazy. See if there are ways to simplify that or to manipulate the data in a different way. If you’re using crosstabs or visualizations, are you nesting too many fields or trying to display

11:24
a large number of marks on a visualization? Those things are also very intensive and can cause performance issues as well. Try removing some of those complex fields or some of the nesting, rerun it and see if that makes a difference. If so, you’ve identified the culprit, and you can find a way to redo or rethink it so it works a little cleaner.

11:49
Beyond that, next steps: if you do find a query that’s running very slowly, what can you do about it? Is it bad SQL, or is there something in the model that wasn’t done properly? Is it something that you personally can go and fix in the model? Sometimes, depending on your environment, you may have access to Framework Manager or the data module and you can make changes yourself. Sometimes it’s a bit more controlled; there’s someone who owns the model and it has to go through a whole process.

12:16
Different organizations have different restrictions and processes in place, but sometimes it’s something you can easily fix. Say it’s joining on employee key and it should be joining on employee ID; that’s why it’s not running well, and that’s an easy fix for you to make. If it’s something more at the database level, where the query itself is clean and looks good but just runs really slowly on the database, maybe it’s something you can fix yourself, or you can request the DBAs to

12:42
take some steps from there and create some indexes on the table itself. If it’s just a massive amount of daily, multi-level transactional data and you’re looking to get things rolled up to yearly or quarterly or monthly, maybe they can create some summary tables or materialized views to stage that data so that Cognos doesn’t have to do all the heavy lifting. Any kind of transformations or aggregations that you can offload

13:08
from Cognos to the database definitely should be offloaded. Cognos isn’t going to be able to crunch numbers as well as the database can. If you can get to a point where Cognos is basically just reading records directly and not crunching and aggregating a lot of data, you’re going to see significant performance gains just from that. Modeling could be a week-long topic on its own, but modeling can honestly be a big problem if the report is going off a model or package.

13:36
You’re going to have to do some investigation in there and see if there’s anything strange going on, typically bad joins. I make these mistakes sometimes: I’ll accidentally select the wrong key, not notice it until I’m running reports, and have to go back in and fix it. It’s easy to get that mapping wrong. Cardinality is another big one: if you’ve got the cardinality wrong, one-to-many is backwards, or you’re doing outer joins, those types of things can all cause issues.

14:03
Some really high-level stuff here: you want to avoid reporting directly off of OLTP transactional systems. Those are not meant for reporting; they’re going to be very slow and have normalization issues. Star schemas: obviously, if you’ve been working in the analytics area for any time at all, you most likely have heard of star schemas, and there’s a reason why they’ve been around for so long. It’s because they work really well. They’re going to give you the best performance and make it much easier to model and analyze your data.

14:33
Assuming you can’t change any of that stuff (you can’t change the database, you can’t tune it, you can’t make any changes to the model, the report’s pretty well designed with no real issues, the data is just really slow), the next best option is data sets. These are a lot like a Tableau data extract (a hyper file) or a Power BI import, if you’re familiar with those tools.

14:56
I’ve been working with Cognos for over 20 years, and in my opinion this was one of the best features ever added. It can really save your bacon if you need to get a really slow report working and there’s not a lot of external things you can do. What this does is pull the data directly into Cognos as a snapshot in time. So if you’re reporting off a data warehouse that is refreshed nightly, you shouldn’t really have an issue with data freshness: that data is going to be updated

15:25
sometime in the middle of the night or early morning, you schedule your data set to refresh after that, and it’s just as fresh as it would be going off live data. If you need it fresher, you can schedule specific times for the refresh, or you can have triggers do it, just like a typical schedule in Cognos. The nice thing about it is the data is going to be summarized at the level of detail you provide. So if I bring in year

15:50
and month, it’s going to aggregate at that level. If I bring in a day field, it’s going to aggregate at that level. It basically uses the data I drop into the data set and pre-aggregates at that level. It’s also going to have it loaded up and ready to go: I don’t have to wait for queries to run or for the data to get passed back from the database. So it’s going to be extremely fast and should give you significant performance gains on a report. Making things even better, in 11.1.7 they introduced the data set editor,

16:20
which essentially adds a reporting interface on top of the existing data set interface, which was very limited until that point. Before this, you were basically limited to a single query; you had to fit everything into a single data set, and you had to build a report first that had all the fields you wanted and then drag that into data sets. So it was kind of a pain to use early on. But with the new data set editor in 11.1.7, you can now use multiple queries, so you can take two different queries of unrelated data,

16:48
join them together, have as many transformations as you need, including unions and custom SQL, load all that into the data set, and then just save it and refresh the data, and it does all that work for you. So it’s almost like a lightweight ETL tool built into Cognos. You can even take a complex report (this is one of my favorite things), go to that final query that is actually driving the list or crosstab or whatever you have in your report,

17:15
and just copy and paste it directly into a dataset, and it will bring over all the supporting queries that are needed. In case that doesn’t make sense: in the image here, if you imagine Query 7 is the one driving the object I’m presenting to my users, I can just copy Query 7, open up a dataset, paste Query 7 in, and it’ll bring over Queries 1 through 6 with the joins and all the information that’s in there

17:44
automatically. Then I can just drop the fields I want from Query 7 onto the data set main page, save and load. Now that the data has been pulled into Cognos, it’s going to be extremely fast and I can use that to drive my reports. The performance should go through the roof: if you’ve got a really slow performing report, you should see a significant performance gain with this technique.

18:07
I will throw out just a couple notes of caution about this. Overall it’s not, quote unquote, best practice to create a one-off solution for each report. I would use this sparingly, sort of as a last resort. Or if you’re going to use data sets, create ones that contain the information needed to drive several reports or answer multiple questions. Don’t get so specific that a data set is only useful for one report; try to make it more general or generic so that

18:37
you can build 20 reports off of it, or replace 20 reports that may be slow with the same one, if possible. The other thing to note is that for reports, a data module is required to interact with the data set. If you click the three dots next to an existing data set, you won’t see an option to create a report. Fun fact: dashboards actually do interact with data sets. I found out a couple years ago that

19:05
it’s not because they access the datasets directly: Cognos actually creates a data module on the fly in the background when you do this; you just don’t see the process happening. I thought that was pretty interesting. But back to the datasets. Once you create a data module that pulls from the dataset, you need to remap your report to use the new source.

19:28
And if the report is simple, you can just manually open up the queries and the data items and replace the old source with the new. If you have a large or complex report that you’re trying to move from an existing live package connection to a data set data module, you can try using the report XML: copy it to the clipboard and do a find and replace to change things in bulk, something like the sketch below.
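As a rough illustration of that bulk find-and-replace (not an official Cognos utility), here is a small Python sketch. Save the copied report XML to a file first; the two source paths below are placeholders, so substitute the actual package and data module paths from your own environment.

  # Hypothetical sketch: bulk-remap a report spec from a package to a data module
  from pathlib import Path

  spec = Path("report_spec.xml").read_text(encoding="utf-8")   # XML copied from the report editor

  old_source = "/content/package[@name='Sales Package']"       # placeholder: current package path
  new_source = "/content/module[@name='Sales Data Module']"    # placeholder: new data module path

  Path("report_spec_remapped.xml").write_text(
      spec.replace(old_source, new_source), encoding="utf-8"
  )

Paste the remapped XML back into the report editor and validate the report before saving over the original.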

19:57
And finally, a lot of reports correctly prefilter data downstream to reduce the number of rows. Since the data in the datasets is going to be pre-aggregated, you might need to include those fields in the dataset even if they aren’t used in the report output. So if you’re filtering on a fiscal year just to limit the data, but you don’t have fiscal year in your report, you probably want to bring in those filtered fields and make sure they’re available in the final version of the report so you’re able to show and filter that slice of the data.

20:26
So this whole thing can be a bit more complex than I’m getting into right now, but I just wanted to stress the speed and performance boost of datasets. If you have any questions about this, please reach out to Scott in the chat and we can set up some time offline to review in more detail. Moving on, let’s talk about poor performing environments.

20:45
We touched on the reporting and some areas to focus on, but how do we troubleshoot the environment itself? What can we do to the environment, assuming that the reports and the models are working as well as they can? Again, it’s always a good idea to ask yourself some questions to figure out where to begin. First: what does IBM recommend for CPU and RAM, and am I above or below those numbers? Is my CPU underutilized? Is it overutilized?

21:14
Are my VMs set up correctly? Is the host hardware pretty current, or is it running on some really old hardware? Are my VMs sharing resources where they should be dedicated? Architecture: am I OK running Cognos on a single server, or should I be using a clustered architecture? Do I need failover or a high availability environment? And then finally, the dispatcher: is my dispatcher tuned correctly? Am I allowing as many reports as it can handle without overcommitting?

21:45
So let’s get into some of the server-related topics. This is a good example of what I mean by underutilized and overutilized. If you’re at the peak part of your day and you see the image on the left, or something close to it, it means you have a lot of resources to spare and you can probably crank up the number of reports that can run at one time. Or, if you don’t need to do anything (reports are being run as needed and nothing is in a pending or waiting state),

22:11
you could probably scale down the CPU you’re not using and use those resources elsewhere. If you see something more like the right side, you’re overutilized and the server’s working too hard, so most likely things are getting locked up. Reports are queued but not able to finish because there just aren’t resources to get the job done, and more and more get backed up. I see that a lot. The goal is to get somewhere in between, preferably around the 75% mark, where the server is taking advantage of

22:39
all the resources but isn’t overutilized to a point where nothing gets run. IBM is very good about updating the hardware and server specs online; they provide a new version for each release. You can Google your version plus the words “supported software” and get this link here, which is in the deck. It has all the information about the hardware as well as software and database compatibility. So I always recommend, as you upgrade or install,

23:07
checking this page to make sure you’ve got the right specs in place to get started. I’m going to focus on 11.2.4, which is the latest version and the long-term support release. For this version the recommended, quote unquote, starting specs are 4 cores and 32 GB of RAM. This is, again, not one-size-fits-all. I do see clients with small Cognos shops run Cognos with less than this,

23:33
and I see big Cognos shops that have ten times these specs on multiple servers. If you’re just getting started, these are a good place to start. Then once you’re up and running and have some users in there, you want to monitor that utilization and decide: do I need to scale up or scale down, or am I in a good spot here? Just because this is what I’d recommend doesn’t necessarily mean it’s going to be perfect for you. So we’ll get into a little bit of sizing,

24:02
but you may need to go through a more detailed sizing exercise to get that perfect number. Virtual machines are pretty much the standard these days. Maybe 15 or 20 years ago there were a lot more physical servers, but I rarely see them today, especially with all the cloud-based growth. VMs are great, but they do have some areas you need to be careful of. Specifically, and this is the biggest no-no I see with virtual machines, resource sharing.

24:31
In this diagram I’ve got, on the top, my VM host, which is the physical server. It’s got a beefy 20 cores and 96 gigs of RAM, and that host has been split up into four VMs. They were all allocated about 8 CPUs and 32 gigs of RAM, even though, if you total up those specs, they exceed the total of the host. That’s a thing you can do with virtual machines. Some applications work well in this resource-sharing scenario; Cognos does not. It does not play well with others. You need to make sure

25:01
Cognos resources are 100% dedicated. If you’re sharing resources and Cognos decides it has to run a big job and needs RAM that isn’t available, bad things are going to happen. I’ve also seen what look like really well-built virtual machines. I get the VM: here’s the new server, let’s take a look and see what’s wrong with it. The hardware looks great to me: I see 8 CPUs, 64 gigs of RAM, things all look like they’re tuned really well.

25:27
From the VM user’s side I can’t tell what the host machine has, but after some back and forth with the VM administrators, we found out that the image itself was sitting on a very old host with pretty dated disks, RAM and CPU. After some complaining, they finally moved the image to a newer rack and we saw the performance just take off. Every issue we had was instantly removed; it was blazing fast. So it’s very difficult to tell what you’ve got

25:55
from the VM side. If you do see things happening and performance doesn’t seem to be where it should, try checking the host to see if there’s anything that can be tweaked, or move to a newer host machine, and it should help you out. Audit reports are going to be very helpful when trying to track down environmental performance issues. I’ve talked about audit reporting in prior webinars, so I’m not going to get into the details or discuss how to set it up today, but

26:24
if you need any help getting it set up, reach out to Scott in the chat and we can review that. Once you’ve got it set up, and hopefully you’ve been logging for some time, you’ll be able to get some really good information out of that data. For example, I can focus on user requests by the minute, the hour, the day, the month; I can build reports to get that information and make it useful to me. I can focus on the report service, the batch service and the query service, and tell which reports are

26:51
running in the background, which reports are being run interactively, and just look at how things progress throughout the day. As an administrator, I highly recommend creating some usage dashboards around this audit data that focus on various time windows. A daily, a weekly and a monthly usage dashboard that you can email to yourself at the end of the day or the end of the week will be very useful to help you stay proactive and be aware of unexpected usage spikes or errors that can cause performance issues.

27:20
You could focus on metrics like the top ten longest-running reports per day or per week, which users are running the most reports, and times of day when usage was really high or really low. Things like that will give you a good understanding of what your environment looks like, when it gets busy and when it’s slow. Who are the problem children hammering the system? Maybe someone is constantly dumping a billion rows into an Excel file,

27:46
or reports are just constantly chewing up resources. Being able to get that information on a daily or weekly basis makes it really easy to know your environment and know where things need to be fixed. Using a dashboard like the one shown here, I can easily see which days of the month had the most usage, whether it’s a specific report running in the background or users actively running reports, and then use that information to see what reports were run that day. Maybe one or more is taking longer than it should.

28:14
I can use that information to apply those report tuning techniques we covered earlier and hopefully address it. If every day things slow down because someone runs a long-running report, and I can use a data set or some other way to speed that thing up, that should free up resources for other things and make the whole environment smoother. So there are lots of ways to review and address what’s going on, and these audit reports are extremely valuable.
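As a concrete starting point, a query along these lines pulls the ten longest-running executions out of the audit database. This is a sketch, not a canned report from the webinar: it assumes the standard Cognos audit schema (the COGIPF_RUNREPORT table, with runtimes in milliseconds) and SQL Server date functions, so verify the table and column names against your version and database.

  # Sketch: top ten longest-running reports in the last day, from the audit DB
  import pyodbc  # pip install pyodbc

  conn = pyodbc.connect("DSN=cognos_audit")  # placeholder ODBC data source name
  sql = """
  SELECT TOP 10
         COGIPF_REPORTPATH,
         COGIPF_RUNTIME / 1000.0 AS runtime_seconds
  FROM   COGIPF_RUNREPORT
  WHERE  COGIPF_LOCALTIMESTAMP >= DATEADD(day, -1, GETDATE())
  ORDER BY COGIPF_RUNTIME DESC
  """
  for path, seconds in conn.cursor().execute(sql):
      print(f"{seconds:8.1f}s  {path}")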

28:39
In addition to audit reporting, I also recommend grabbing some of your server stats. This can be done using tools like PerfMon, or top and htop on Linux, and you can have them capture stats and write them to disk. Then you can load that into Cognos via file upload, or put it into a table and blend it with your existing audit data. You can even sync it up with the report usage, as shown here, to tell when things are causing the CPU or RAM to spike.

29:09
I’ve seen some really cool stuff around blending additional information with the audit package. Make sure you target some of the main Cognos processes here: the Java executable, cogbootstrap, the BIBus, as shown here. Grab statistics around those and you’ll give yourself even richer data to figure out where, and at what times of day, you need to look when troubleshooting.
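For the capture itself, something as simple as the following works if you’d rather not configure PerfMon. It’s a hedged sketch using the Python psutil library; the process names are a guess based on the ones mentioned above and vary by Cognos version and OS, so adjust the list after checking what’s actually running on your server.

  # Sketch: sample CPU/RAM for Cognos processes once a minute, append to a CSV
  # that can later be uploaded into Cognos and blended with the audit data
  import csv, datetime, time
  import psutil  # pip install psutil

  TARGETS = ("java", "BIBusTKServerMain", "cogbootstrap")  # assumed process name prefixes

  with open("cognos_server_stats.csv", "a", newline="") as f:
      writer = csv.writer(f)
      while True:
          now = datetime.datetime.now().isoformat(timespec="seconds")
          for p in psutil.process_iter(["name", "cpu_percent", "memory_info"]):
              name = p.info["name"] or ""
              if name.startswith(TARGETS) and p.info["memory_info"]:
                  rss_mb = p.info["memory_info"].rss // (1024 * 1024)
                  writer.writerow([now, name, p.info["cpu_percent"], rss_mb])
          f.flush()
          time.sleep(60)  # one sample per minute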

29:40
Moving on into some architecture discussion. This slide is, I think, about 500 years old, but it is still relevant. Cognos, if you don’t know, consists of three main components, each of which has a specific function: a gateway, a Content Manager, and a dispatcher. The gateway is essentially a web server. It’s somewhat optional, but highly recommended: it gives you features like single sign-on and image browsing, and different landing pages for your users.

30:06
The Content Manager is the so-called brain of Cognos. It connects to and reads and writes from its own Cognos-specific database, and it authenticates your users against your Active Directory or LDAP sources. And then the dispatchers are your workhorses. They run reports, connect to your reporting databases and generate the report outputs. These can all be placed on a single server, or they can be distributed across multiple servers. A single server is easy to manage and in many cases might be all you need.

30:36
But we do have other options to distribute these, each with additional benefits. A distributed or clustered environment allows you to do a few things beyond a single server, such as having multiple gateways and standing up a load balancer to direct traffic to both. You can set up another Content Manager, which acts as what’s called the failover Content Manager: in the event that your Content Manager fails, it can

31:04
hand off to the other one, and that way your environment stays up. It also allows you to have multiple dispatchers to distribute report load and user load across multiple servers. So instead of scaling up, you can scale out horizontally and have two servers handling reports. If one server just doesn’t have any more resources, you can add more dispatchers. All of these can have a significant impact on performance and stability.

31:33
Then, taking things another step further, there’s something called high availability, or HA. Essentially that’s the same as the distributed slide we just saw, but now there is no single point of failure. In this architecture diagram, any single server in the cluster could crash or be brought down and the environment would continue to function as if nothing were wrong. So if your Cognos environment is considered mission critical and you need really high uptime, this is the architecture you want to use.

32:02
And again, like a distributed environment, we get the same benefits of load balancing and failover here as well, and even more so with additional dispatchers. We’ve got, I think, three dispatchers in here, so you’re able to allow that many more reports to run concurrently by scaling out. Okay, time to get into dispatcher tuning. Dispatcher tuning

32:32
looks very complicated. The first time I saw this property screen, I almost had a panic attack. It’s got about 87 options, last time I counted, so that’s the bad news. The good news is it’s not as bad as it looks, though I really wish they would redesign this page. It breaks down into four basic types of options, and once we touch on those, you should have a better understanding

32:59
of how these work and what you’re looking at here. Those four types are: high versus low affinity requests, number of processes, peak versus non-peak, and the service itself. High affinity requests are things that you would expect to be very fast and snappy, for example navigating the Cognos portal, clicking through folders looking for a report, clicking page up and page down in a report,

33:28
or viewing saved output. These are things that you want to happen in less than a second; these are examples of high affinity requests. Low affinity requests are things you would expect to take longer than a second, even though they might not take that long. I have seen very fast reports run in a second or two. These don’t necessarily take a long time, but you would expect that some execution has to happen:

33:57
things have to get rendered. Running reports is the main one that comes to mind. When multiple servers or services need to connect, or Cognos sends a SQL request, connects to a database, or validates queries, these are all considered low affinity examples. So that’s high versus low. Then you’ve got the services themselves. Cognos runs a lot of services,

34:24
and most of these have a line in the dispatcher settings for tuning. You’ll see high affinity requests for the report service, high affinity requests for the batch service, then a low affinity line, and on and on: lots and lots of services you can tune. The good news is that you really only need to focus on three of them: report, batch and query, and we’ll go into the differences. Batch is going to be background jobs: these are your schedules and emails, triggers and saved outputs,

34:54
anything where you are not necessarily watching the progress. On the opposite side of that is the report service. These could be the exact same reports as the ones running in batch, but instead of being emailed or saved as output, they’re run what Cognos calls interactively. An example: a user clicks on a report, picks a couple prompts, hits run, and then sits there staring at that spinning blue wheel waiting for the report to display.

35:23
That’s a report service request. Whereas if I say run that in the background and send me an email when it’s done, that same report runs as batch. And then the query service is the somewhat newer dynamic query mode (DQM) engine that was introduced in Cognos 10. It’s a little different: it uses JDBC drivers, it’s got its own set of properties, and it focuses on JVM heaps and garbage collection and things that are unique to the query service. We’ll touch on that a little bit

35:54
in the deck. One other thing to note is the concept of peak versus non-peak. What does that mean? You’ll see in the settings there’s a place to enter a start and end time, using a 24-hour military clock. The defaults are a 7:00 AM start and 1800 hours, which is 6:00 PM, as the end, and I recommend

36:20
changing the start and end hours to whatever makes sense for your business. The idea is that the people in the office during those peak hours, the ones running reports and watching that blue wheel spin, should get the priority. People don’t really notice so much if they say send this to my email and go do something else; if it takes 5 or 10 minutes longer for that to run and they get their email, they’re not going to complain. But someone sitting there staring at the screen

36:49
waiting for it to run, those are the ones who are going to call you up and say Cognos is slow, what’s going on? So give the report service priority during peak hours. Once peak hours are done and people leave the office, let the batch service run wild and let those schedules run; fewer people will be in the environment running things interactively. Most of the settings are tuned to do this out of the box, but if you’ve changed them, this is, I believe, the best way to handle it.

37:17
Routing rules are another feature that’s commonly overlooked. These can be set up to direct users to specific servers for various reasons. If you are supporting both CQM and DQM, or you have cubes but want your other servers running in 64-bit mode, you could set up routing rules so that specific packages get routed to and run directly on specific dispatchers. I’ve seen customers who

37:45
have license-based routing rules. They have an unlimited-user, PVU-based server that 99% of the users’ requests run against, and then they’ve got some heavy data users, power analysts, who need a beefier machine that’s not tied to CPU licensing; those users have named licenses and go to a different machine. So you can have different user types going to different servers. You can also have routing rules that separate

38:15
batch-focused dispatchers from report service dispatchers. If you do need things running in the background all day long and can’t defer them to off-peak hours, you can have a separate dispatcher dedicated 100% to just running batch jobs. That way the other server that people are using interactively isn’t losing resources, and users don’t have to sit there and stare at that spinning wheel all day. So there are lots of different ways to tune the dispatchers. And then finally,

38:45
before we get into the actual dispatcher tuning, there’s one more concept, and that’s the concurrent user count. Concurrent means how many reports or jobs you can run at the exact same time. There is a generic rule of thumb here, the 100:10:1 rule, which means if you have 100 named users, you should expect 10 active users, which translates to 1 concurrent user. So along that line, if you have 4000 named users,

39:12
you should size and plan for 40 concurrent users. Again, like the hardware sizing, this is not one-size-fits-all; it isn’t a perfect algorithm, but it can be somewhat useful for doing some initial sizing. Okay. Once you’ve got all those concepts down, you can tie it all together in the dispatcher settings.

39:41
You’ve got to combine the affinity type (high or low) with the service you want to focus on (batch, report, query) and the peak or non-peak time. If you look at the rows of the dispatcher settings, you’ll see this; this is a little snippet here, one through three, and each one is a combination. Number of high affinity connections for the report service during peak period. Number of low affinity connections for the report service during peak period.

40:09
Maximum number of processes for the report service during peak period. There are the same exact three for non-peak, the same three for the batch service, and the same three for the query service, so they’re very repetitive. I think it would be a nicer UI if these were grouped better: a whole section just for batch and one just for report, since all these settings are connected. The way they’re organized in that big long screen is messy and hard to read.

40:40
On changing these settings, I always say: if Cognos ain’t broke, don’t try to fix it. These are dangerous settings. On top of that, one of the top issues I find working with clients is that these have been incorrectly changed to a much higher number than they should be. It’s confusing enough as it is with all the terminology we’ve covered (affinities, peaks, services), but there’s also some math in play here.

41:08
What I mean by that is you might think that the value on this slide for that third row, the maximum number of processes, would be how many reports Cognos can run with it set to two. That is actually not true. It’s actually going to allow 16 low and 4 high affinity connections, a total of 20, and that will probably run fine

41:37
for a standard environment. But I’ve seen people change that two to an eight. You’re not actually letting 8 reports run at once; you’re going to be allowing 64, because it takes that 8 and multiplies it by the number of low affinity connections you’ve got per process. Now you’ve just opened the door to let 64 reports run at once, and unless you’ve got a really beefy machine, that’s probably going to crash your server if 64 reports do run at once.
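The arithmetic is easy to check before you touch anything. Here is a tiny sketch using the per-process numbers from the slide (8 low and 2 high affinity connections per process; substitute your own settings):

  # The "max processes" value is a multiplier, not a report cap
  def total_connections(max_processes, low_per_process=8, high_per_process=2):
      low = max_processes * low_per_process
      high = max_processes * high_per_process
      return low, high, low + high

  print(total_connections(2))  # (16, 4, 20): the default, usually fine
  print(total_connections(8))  # (64, 16, 80): 64 concurrent reports can flatten a server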

42:07
I’ve seen companies with lots of schedules running where 64 can easily queue up quickly, and after they start building up, the whole server just comes crashing down: 100% CPU utilization, nothing can run, more and more jobs keep stacking up, and they have to reboot and can’t figure out why. So again, I think the way these numbers work is very confusing; it took some trial and error to figure this out, so don’t change these unless you know what you’re doing.

42:37
I did want to show a quick demo just to show exactly what’s going on here. Let me switch to Cognos. I’ve got a job here with about 18 reports, and in my dispatcher settings, let me just make this a little bigger, I’ve got the standard values. I’m looking at the high affinity and low affinity for batch. I think I’m in non-peak, so where’s peak?

43:11
Down here. So again, you can see how confusing it is; they’re not even grouped together. But I’ve got 2, 4, 2: right now it should allow 4 low affinity connections per process, and I’ve got two processes, so it should only let me run 8 total reports at once. So if I go to this job, try to run 17 of them, and kick that off, I’ll go to my status page.

43:41
OK, so here’s my job. I should only have 8 running right now: 1, 2, 3, 4, 5, 6, 7, 8. You can see I’ve got eight executing, the rest of them are in a pending status, and one has already succeeded. This is what you typically want: you don’t want Cognos to start running

44:08
reports if it doesn’t have the capacity, so 8 is the limit. Anything beyond eight goes into this pending status. If I refresh, they’ve all run now, but they go one by one: as the queues open up, they move in. So that only let me run 8 at a time, which happens to be a sweet spot for me; I can handle 8 at a time. Now if I go back into my dispatcher settings and change that same value

44:41
to 4, I can now run 4 times 4, 16 at once. Hit OK, go back here and rerun this, and once this pops up you should see all the jobs running at once now.

45:11
Hmm, that’s the wrong one. Let me try that one more time.

45:53
I’m not sure why it’s not working right now. Well, that’s the gist of it; something I did wrong in the dispatcher settings, though I was working on it last night and it was working just fine. That’s the risk of a live demo. But essentially that’s what you want to do: make sure those numbers add up, and you’ll be able to see that this total here times the affinity connections gives you the number you want.

46:19
Just be careful when you increase these numbers that you’re not exceeding what the server can handle. Otherwise you’re going to get those 100% utilizations where nothing gets done. And then finally, the Java heap settings. These are set to fairly conservative defaults: the initial size defaults to 1024 and the limit defaults to 8192.

46:49
If you’ve got at least 32 gigs of RAM and you are using dynamic query mode, I would bump these up. I usually set the initial to 4096 and the limit to 16384. If you’re supporting a lot of users, you may want to go higher than that. I would start with those and monitor the JVM to see how it’s doing with the increased specs.
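Those query service settings correspond to the JVM’s initial and maximum heap, so a quick sanity check is to compare the maximum against the server’s physical RAM before raising it. A small sketch (the half-of-RAM threshold is just a rough rule derived from the 32 GB / 16384 MB pairing above, not an IBM number):

  # Sketch: make sure the query service max heap leaves headroom for everything else
  import psutil  # pip install psutil

  initial_mb, max_mb = 4096, 16384                      # values suggested in the talk
  total_mb = psutil.virtual_memory().total // (1024 * 1024)

  if max_mb > total_mb // 2:
      print(f"Warning: {max_mb} MB heap on a {total_mb} MB server may starve other services")
  else:
      print(f"OK: {max_mb} MB heap leaves {total_mb - max_mb} MB for the rest of Cognos and the OS")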

47:13
As far as garbage collection: if you see things freezing or locking up around the query service, it might be related to garbage collection; that can have that effect. There are some tuning options around that, but it does get complicated quickly. I did post a link to the DQM tuning guide that you can get here. It provides a lot of deep insight into how to troubleshoot and get the best performance out of dynamic query mode.

47:38
And then dynamic cubes. I don’t know if anybody out there is using them; they still exist, and they do have a different set of rules. There’s a link here for that as well if you have questions or want more information about how to tune those. And then some last-minute quick tips, since we’re running out of time. Clean up your content store database: there are built-in scripts in the tool, and a bloated content store database can really slow things down.

48:07
You can use these scripts to cut down the size of that database. Things like saved outputs of PDFs and Excel files can chew up gigs of space and may have been generated 10 years ago. I’ve seen places where someone was running the same 100-page PDF output with unlimited versions every day for multiple years. That can chew up a ton of space in your content store database. It’s very fast and easy to clean these up if you use some of the content removal, consistency check and notification scripts.

48:37
We also have a tool called Migration Assessment that does a deep dive on your content and creates an inventory, which can then be used to target outdated reports and content. If that sounds like something you might be interested in, reach out to Scott in the chat for more information. Another quick win is a notification database. I would say these are a must for environments that have heavy schedules.

49:01
By enabling this, you’re going to move several tables, and a significant amount of size, from the content store database into a new database. It also helps reduce the number of connections to the content store database by directing traffic to that separate database. I’ve made this change for several clients who were heavy scheduled-job users, and there was a significant impact just from taking all that extra stuff out of the content store and moving it somewhere else. It’s very easy to set up and makes a big difference.

49:30
Task Manager, sort of a did-you-know moment. You may already know this, but you can go into Task Manager, on the Processes tab or the Details tab, right-click the columns and include additional information: the image path name and the command line. This makes it a lot more usable. You can then sort or filter on the Cognos installation directory to see which Java

49:58
processes Cognos is running and what the different Java options are. You can monitor all the Cognos activity much more easily than scrolling through that long list of Windows and other processes that may be living on the server. It’s a quick, easy way to focus on just the tasks being run by Cognos. Cognos SQL scripts, another did-you-know moment: Cognos ships a bunch of SQL scripts in the installation directory that you can run against your database.

50:28
There’s a folder for each database type: SQL Server, Oracle, DB2, etc., and there are useful scripts in there. Specifically, there’s a content store size profiling script that gives you a breakdown of object counts: how many reports, how many folders, how many saved outputs, how big the saved outputs are. Just a profile of your content store database. You don’t have to go out and buy any special tools; it’s right there. Just drop it into a SQL editor, point it at your content store and run it.
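If you’d rather script it than open a SQL editor, a small runner like this works too. It’s purely a convenience sketch: the script file name and the DSN are placeholders, since the exact file names and locations vary by Cognos version, so copy the actual script out of your installation directory first.

  # Hypothetical sketch: run a content store profiling script and print the results
  from pathlib import Path
  import pyodbc  # pip install pyodbc

  script = Path("content_store_profiling.sql")   # placeholder: copy the real script from your install
  conn = pyodbc.connect("DSN=content_store")     # placeholder ODBC data source name

  cur = conn.cursor()
  for statement in script.read_text().split(";"):  # naive split; fine for simple SELECT scripts
      if statement.strip():
          cur.execute(statement)
          if cur.description:          # the statement returned rows
              for row in cur.fetchall():
                  print(row)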

50:56
It’s very helpful when you’re trying to clean up your content store database. And then this one’s not necessarily performance related, but I do like to throw it out there because people don’t necessarily know about it. If you ever lock yourself out of any of the admin screens, there’s a script in here called add system admin member, which you can run, and it will put the Everyone role back into the admin group. Then you can re-add yourself and remove the Everyone role. It’s kind of a get-out-of-jail-free card: if you mess up the security, just run that script and it puts you back into the admin role.

51:26
And then finally, last but not least: if you have verified everything we touched on today and still have performance issues, it is possible that you’re experiencing a defect or a bug in the software. Make sure you check the fix lists often to see if the issue has already been found and logged as a known issue. There’s a good chance you may have encountered something that’s already been logged, and there could be a fix pack or an interim fix already out that you can just apply.

51:53
If not, reach out to IBM support and have them investigate, and possibly open a new ticket so it gets resolved in the next release. And if you still need help, you can reach out to Senturus. All right, thank you, Todd. Steve Reed Pittman again: just a bit of quick housekeeping before we go into our Q&A session. I know we’re running low on time, so I’ll go through this quickly.

52:18
If you are looking for help improving performance in your Cognos environments, we offer multiple services to help you out: mentoring, health checks and assessments. We also offer training

52:30
related to Cognos performance tuning. You can also just get Todd to come in, be hands on, and fix your environments for you. For all of those things, reach out to Scott Felton through the link he’s been posting in chat. Scott would be happy to talk to you and get things set up so we can help you out with your performance challenges.

52:51
A quick bit about Senturus. We have additional resources on our website; visit us any time at senturus.com/resources. We've got tech tips, past and current slide decks, blog posts, a little bit of everything to keep you up to date on the latest and greatest in the Cognos world.

53:11
We've got a few upcoming events I wanted to share with you. We've got a Chat with Pat, hosted by our own Pat Powers, on building good visualizations in Tableau, coming up in a couple of weeks. We also have a webinar on using multiple BI tools with a universal semantic layer: one semantic layer in the cloud that you can use across, for example, Power BI, Tableau and Cognos.

53:38
We have another Chat with Pat coming up in June on building data modules in Cognos, so we hope you'll join us for one or more of those; go to senturus.com/events to register. Here at Senturus we specialize in BI modernizations and migrations across the entire stack, and we shine in hybrid environments. We encounter a lot of customers now who have not only Cognos but also Power BI and/or Tableau in their shops.

54:07
And we are happy to help with your BI needs across all of those products. We've been in business for 22 years, with over 1400 clients and thousands of projects. We're a dedicated team, and a lot of us have been in this field for quite a long time. We're a boutique firm: small enough to provide personal assistance, but big enough to meet all of your BI needs.

54:33
We are hiring. We're looking for a senior Microsoft BI consultant, so if that happens to fit your skill set and you're interested in possibly coming to work with us, send your resume to us at jobs@Senturus.com. You can also find more information on our website.

54:51
And with that, let's take these last few minutes to dive into some questions. Todd, as usual there's been a lot in the Q&A panel as we've gone along. I don't know if you've had a chance to look at those, or if you want me to pull out a few of the most common questions that came up and give them to you. Yeah, I'm just looking through them now. I can go through a couple.

55:16
I've got a couple queued up here. The first one was about installing fix pack one for 11.2.4: can you install it on top of 11.2.0, or do you have to install 11.2.4 first and then FP1? There are two answers to this. First, any fix pack in the Cognos 11 world is a full build. You don't have to install the GA release like you did in Cognos 10, where we had to install that and then download fix packs on top; every version is a full release. That said, I think fix pack one

55:47
was released a month or so ago, but there was a small issue with certificates, so it has been rolled back and it's now, I think, an IF1. And I think if you try to download the original version of 11.2.4, you now get 11.2.4.1 automatically, so there should only be one version out there if you download it now. So the answer is yes, you can just install it over the top without needing the first version at all, or you can do a fresh install.

56:15
Just make sure you get the right version of it. Let's see: can you join different datasets in data modules, and if so, how is the performance? Yes, you can, and it works really well. I saw a really cool article somewhere online where someone built almost an ETL system entirely with datasets, loading everything into memory and then joining the different datasets through a data module

56:44
and posting some benchmarks. I'll see if I can find that and share it when I post the questions. But yes, you can join different datasets in a data module; it works really well, and I would recommend it if that's something you have an appetite for. Next: is the dataset file stored on the Cognos server or in physical files? It's a bit confusing. The dataset is stored in the content store database

57:12
and then, when you go and access it, it's brought down to disk on the server. Once it hasn't been used in X amount of time, a cleanup process removes the physical file. But it always lives in the content store database, so the dataset itself never actually gets deleted: even if the file on disk does get cleaned up, Cognos can write it back down to disk and access it very quickly.
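That cleanup behavior is essentially an idle-file sweep. Here's a minimal sketch of the idea in Python; this is not Cognos's actual code, and the data directory and idle threshold are illustrative stand-ins for what a real deployment lets you configure.

```python
import os
import time

DATA_DIR = r"C:\Program Files\ibm\cognos\analytics\data"  # hypothetical location
IDLE_SECONDS = 4 * 60 * 60  # e.g. clean up after four idle hours

now = time.time()
for name in os.listdir(DATA_DIR):
    path = os.path.join(DATA_DIR, name)
    # st_atime is the last access time; files idle past the threshold are safe
    # to drop because the master copy still lives in the content store.
    if os.path.isfile(path) and now - os.stat(path).st_atime > IDLE_SECONDS:
        os.remove(path)
```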

57:39
Let's see: what are the implications of having two dispatchers linked, with one primary and the other running standby on different infrastructure? I'm not sure I understand the question, but there are obvious benefits to having a failover Content Manager. As I mentioned in those slides, if that Content Manager does go down, whether there's something wrong with the server or for whatever reason, the secondary one will automatically take over and become active

58:08
and your Cognos deployment won't go down; users are still able to access it and run reports. If that's not what you're asking, feel free to drop another question in and I can answer it. Benefits of breaking up the Content Manager from the dispatcher? It depends. I've seen people who have dedicated Content Managers for various reasons. It may make the Content Manager a little bit more stable if it doesn't have to do any

58:36
report execution, and the dispatchers aren't chewing up any Content Manager resources. So you may see some benefit from that, but it doesn't necessarily have to happen; it depends on how busy your system is and how busy the Content Manager is. You'd have to do a bit of a deeper dive to determine that. But like I said, you may see some benefit from keeping that machine 100% dedicated to Content Manager tasks. Lots of questions about separating those two tiers.

59:06
Yeah. So again, reach out offline; I'll try to just touch things at a high level in the Q&A, but we're running out of time here. Does Cognos support multifactor authentication? Yes, it should. Is the IPA HTML format only? Yes, I believe so, although someone in here did have a good point: it's also available in dashboarding, and you can get

59:33
performance details in PDF report executions as well, so that must be a new feature I wasn't even aware of; thank you, Danny, for sharing that. Hey Todd, Jeannie had a question earlier about not being able to see IPA as an option, I think in 11.1.7, and I'm trying to remember: are there special permissions that need to be assigned to users? It seems like it's only visible in certain cases, but I just can't remember.

59:59
I wasn't aware of anything special. Obviously you have to be in the editor; you can't just get it from the portal. But I'll have to check and see. I don't think it's tied to DQM only, but maybe it is; I'll have to double-check. I don't think I've seen it not be there, and I can reach out to Jeannie offline and take a look if it's not working.

1:00:25
But it is only available through the authoring interface, right? That's correct. Yeah, you can't just do it from a direct report run in the portal. Okay, yes, it's for editors only. I've gotten through most of these. Is it possible to create a dataset from a TM1 cube? I haven't tried that; I'd have to test it and see. I don't have a TM1 cube easily accessible, but I don't

1:00:54
see why it wouldn't be possible, but I'll have to double-check on that. I know we're over the hour here and there are still a bunch of questions. Yeah, I will try to get all of these logged, and we always post a link to the Q&A that we get to. So if we didn't answer your question today, we will get it answered and posted on the website along with the deck and the recording. Apologies for

1:01:22
going so long and not getting to your questions, but we will answer them and get them uploaded, so just check back in a couple of days and hopefully we'll have all that for you. Thank you all; I appreciate you tuning in. Put any last-minute questions in and we'll get to them as soon as we can. Thank you. All right, thanks, Todd, for all the great information today, and thank you, everybody, for joining us. If you have further questions, reach out to us at senturus.com.

1:01:49
You can also reach Scott through his link there in the chat window, and we hope to have you join us again at a future Senturus webinar. Everybody have a great day, and thanks again for being here.
