This post is about how to aggregate your log files in a single place and make them easily searchable via a convenient web interface.
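To make that concrete, here is a minimal Python sketch of the round trip, assuming a local Elasticsearch node on the default port; the index name and document fields are made up for the example:

```python
import requests

ES = "http://localhost:9200"   # assumed local Elasticsearch node
INDEX = "db-logs"              # hypothetical index name

# Ship one log line as a JSON document; refresh=true makes it
# searchable immediately instead of after the next index refresh.
doc = {
    "@timestamp": "2017-04-02T10:15:00Z",
    "host": "db01",
    "message": "Starting ORACLE instance (normal)",
}
requests.post(f"{ES}/{INDEX}/_doc", params={"refresh": "true"}, json=doc)

# Full-text search across everything aggregated into the index.
query = {"query": {"match": {"message": "starting"}}}
resp = requests.post(f"{ES}/{INDEX}/_search", json=query).json()
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["@timestamp"], hit["_source"]["message"])
```

In practice you would let Filebeat or Logstash do the shipping and Kibana do the searching, but the moving parts boil down to these two REST calls.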
Power BI, like many Microsoft products, is multi-threaded. This can be seen in the logs and even in the Task Manager. I know, I know… you’ve probably heard all of this before…
The importance of this information is that the logs display Process IDs (PIDs) that are separate from the main Power BI Desktop executable, including the secondary processes. Moving on from the Power BI logs that reside in the Performance folder (see Part I here), we can connect the PIDs and TIDs (Transaction IDs) to information from the Task Manager and the data it displays:
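As a rough sketch of that connection, the following Python snippet pulls PIDs out of a log file and maps them to the same image names Task Manager shows, via the Windows tasklist command. The log path, the one-JSON-object-per-line layout, and the processId field name are assumptions; adapt them to the actual files in your Performance folder:

```python
import csv
import json
import subprocess

# Hypothetical parser: pull PIDs out of a Power BI performance log.
# The "processId" field name and line-per-entry layout are assumptions.
def pids_from_log(path):
    pids = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            try:
                entry = json.loads(line)
            except ValueError:
                continue                       # skip non-JSON lines
            if "processId" in entry:
                pids.add(int(entry["processId"]))
    return pids

# Map PIDs to image names using the same data Task Manager shows,
# via the Windows tasklist command in CSV format.
def taskmanager_view():
    out = subprocess.run(["tasklist", "/FO", "CSV"],
                         capture_output=True, text=True, check=True).stdout
    return {int(row["PID"]): row["Image Name"]
            for row in csv.DictReader(out.splitlines())}

if __name__ == "__main__":
    procs = taskmanager_view()
    for pid in sorted(pids_from_log(r"C:\path\to\PBIDesktop.log")):  # hypothetical path
        print(pid, procs.get(pid, "<no longer running>"))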
So far we have gone over the locations and the basics of logging and tracing in Power BI. Now I want to know how to make more sense of the data. In Oracle, we use a utility called TKProf (along with others and a number of third-party tools) to make sense of what comes from the logs. SQL Server has Log Analytics and the profiler, but what can I do with Power BI?
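There is no TKProf for Power BI, but the kind of roll-up TKProf does is easy to approximate once the log entries are parsed. A small sketch, with the event structure invented for illustration:

```python
from collections import defaultdict

# TKProf-style roll-up over parsed log events: group by operation and
# report count, total and average duration. The event dictionaries are
# a stand-in for whatever your Power BI log parser emits.
def summarize(events):
    totals = defaultdict(lambda: [0, 0.0])        # name -> [count, total_ms]
    for e in events:
        agg = totals[e["operation"]]
        agg[0] += 1
        agg[1] += e["duration_ms"]
    print(f"{'operation':<28}{'count':>7}{'total ms':>12}{'avg ms':>10}")
    for name, (count, total) in sorted(totals.items(), key=lambda kv: -kv[1][1]):
        print(f"{name:<28}{count:>7}{total:>12.1f}{total / count:>10.1f}")

summarize([
    {"operation": "DataSource.Refresh", "duration_ms": 5230.0},
    {"operation": "DataSource.Refresh", "duration_ms": 4810.5},
    {"operation": "Evaluate.Measure",   "duration_ms": 120.2},
])
```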
First, let’s discuss what happens when we have actual activity. In my first post, the system was pretty static. This time I chose to open a file with larger data refreshes from multiple sources, plus added tables, calculated columns, and measures. One Access database has over 10 million rows that are refreshed when I first open the PBIX file:
In my opinion, the volume of logging (aka redo log, aka xlog, aka WAL) is the most important factor for OLTP performance, availability and scalability, for several reasons:
In this post, I look at the volume of logging generated by some DML in Postgres and Oracle. I know Oracle quite well and am just starting to look at Postgres. The comparison here is not a contest, but a way to understand both better. For example, the default behavior of Postgres, with full_page_writes=on, is very similar to Oracle’s ‘begin backup’ mode. The comparison makes little sense for most Postgres DBAs, but it will probably help Oracle DBAs understand Postgres.
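To give an idea of how such volumes can be measured on the Postgres side, here is a small Python sketch using psycopg2 and the Postgres 10+ functions pg_current_wal_lsn() and pg_wal_lsn_diff(); the connection string and table are assumptions for a scratch database, and CHECKPOINT requires superuser privileges:

```python
import psycopg2

# Diff the WAL insert position around a statement to see how many
# bytes of WAL it generated.
conn = psycopg2.connect("dbname=demo")   # assumed local scratch database
conn.autocommit = True
cur = conn.cursor()

def wal_bytes(statement):
    cur.execute("SELECT pg_current_wal_lsn()")
    (start,) = cur.fetchone()
    cur.execute(statement)
    cur.execute("SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), %s)", (start,))
    (nbytes,) = cur.fetchone()
    return int(nbytes)

cur.execute("CREATE TABLE IF NOT EXISTS demo (n int, pad text)")
print("insert:", wal_bytes(
    "INSERT INTO demo SELECT g, repeat('x',100) FROM generate_series(1,10000) g"))

# With full_page_writes=on, the first change to each block after a
# checkpoint logs the whole page, so the same UPDATE run right after
# CHECKPOINT generates noticeably more WAL.
cur.execute("CHECKPOINT")
print("update:", wal_bytes("UPDATE demo SET n = n + 1"))
```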
This blog post is about two things: first, how you can monitor who is bringing your database up and down (there is a twist at the end!), and second, how you can very conveniently do that with aggregated logs in a browser using a tool called ‘Kibana’, which is the K in ELK.
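As a taste of what that looks like once the logs are in Elasticsearch, here is a sketch of the kind of query Kibana runs for you, expressed as a raw search in Python; the index pattern, field names, and the exact alert-log phrases are assumptions to adapt to your own pipeline:

```python
import requests

ES = "http://localhost:9200"   # assumed Elasticsearch node behind Kibana
INDEX = "alertlog-*"           # hypothetical index pattern for shipped alert logs

# The same question you would type into Kibana's search bar: the newest
# startup/shutdown messages, newest first.
query = {
    "size": 20,
    "sort": [{"@timestamp": "desc"}],
    "query": {
        "bool": {
            "should": [
                {"match_phrase": {"message": "Starting ORACLE instance"}},
                {"match_phrase": {"message": "Shutting down instance"}},
            ],
            "minimum_should_match": 1,
        }
    },
}
resp = requests.post(f"{ES}/{INDEX}/_search", json=query).json()
for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("host"), src.get("message"))
```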