|
|
@@ -5,7 +5,7 @@ Application Level Tracing library
|
|
|
Overview
|
|
|
--------
|
|
|
|
|
|
-IDF provides a useful feature for program behavior analysis called **Application Level Tracing**. The feature can be enabled in menuconfig and allows transfer of arbitrary data between the host and {IDF_TARGET_NAME} via JTAG interface with minimal overhead on program execution.
|
|
|
+IDF provides a useful feature for program behavior analysis: application level tracing. It is implemented in the corresponding library and can be enabled in menuconfig. This feature allows transferring arbitrary data between the host and {IDF_TARGET_NAME} via the JTAG, UART, or USB interfaces with small overhead on program execution. It is possible to use the JTAG and UART interfaces simultaneously. The UART interface is mostly used for connection with the SEGGER SystemView tool (see `SystemView <https://www.segger.com/products/development-tools/systemview/>`_).
|
|
|
|
|
|
Developers can use this library to send application specific state of execution to the host and receive commands or other type of info in the opposite direction at runtime. The main use cases of this library are:
|
|
|
|
|
|
@@ -29,9 +29,10 @@ Modes of Operation
|
|
|
|
|
|
The library supports two modes of operation:
|
|
|
|
|
|
-**Post-mortem mode**. This is the default mode. The mode does not need interaction with the host side. In this mode, the tracing module does not check whether host has read all the data from the *HW UP BUFFER* and overwrites it with new data. This mode is useful when only the latest trace data is interesting to the user, e.g. for analyzing the program's behavior just before a crash. Host can read the data later upon user request, e.g. via a special OpenOCD command in case of working via JTAG interface.
|
|
|
+**Post-mortem mode**. This is the default mode and it does not need interaction with the host side. In this mode, the tracing module does not check whether the host has read all the data from the *HW UP BUFFER* and overwrites old data with new data. This mode is useful when only the latest trace data is interesting to the user, e.g. for analyzing the program's behavior just before a crash. The host can read the data later upon user request, e.g. via a special OpenOCD command when working via the JTAG interface.
|
|
|
+
|
|
|
+**Streaming mode.** The tracing module enters this mode when the host connects to {IDF_TARGET_NAME}. In this mode, before writing new data to the *HW UP BUFFER*, the tracing module checks whether there is enough space in it and, if necessary, waits for the host to read data and free enough memory. The maximum waiting time is controlled via timeout values passed by users to the corresponding API routines. So when an application tries to write data to the trace buffer with a finite maximum waiting time, it is possible that this data will be dropped. This is especially true for tracing from time-critical code (ISRs, OS scheduler code, etc.), where infinite timeouts can lead to a system malfunction. In order to avoid loss of such critical data, developers can enable additional data buffering via the menuconfig option :ref:`CONFIG_APPTRACE_PENDING_DATA_SIZE_MAX`. This option specifies the size of data which can be buffered under the above conditions. It can also help to overcome situations when data transfer to the host is temporarily slowed down, e.g. due to USB bus congestion, but it will not help when the average bitrate of the trace data stream exceeds the hardware interface capabilities.
|
|
|
|
|
|
-**Streaming mode.** Tracing module enters this mode when host connects to {IDF_TARGET_NAME}. In this mode, before writing new data to *HW UP BUFFER*, the tracing module checks that there is enough space in it and if necessary waits for the host to read data and free enough memory. Maximum waiting time is controlled via timeout values passed by users to corresponding API routines. When an application tries to write data to the trace buffer with a finite wait time, it is possible that the data will be dropped. This is especially true when tracing from time critical code (ISRs, OS scheduler code etc.) where infinite timeouts can lead to a system malfunction. In order to avoid loss of such critical data, developers can enable additional data buffering via menuconfig option :ref:`CONFIG_APPTRACE_PENDING_DATA_SIZE_MAX`. This macro specifies the size of data which can be buffered in such scenarios. This option can also help to overcome a situation when data transfer to the host is temporarily slowed down, e.g. due to USB bus congestions, etc. But it will not help when the average bitrate of the trace data stream exceeds the HW interface capabilities.
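+As an illustration of the streaming-mode trade-off described above, the sketch below writes a sample to the trace buffer with a finite timeout and simply counts the sample as dropped when the host does not free enough space in time. The destination (JTAG/TRAX) and the timeout value are assumptions for the example; adapt them and the error handling to your application.
+
+.. code-block:: c
+
+    #include "esp_app_trace.h"
+
+    static uint32_t s_dropped_samples;
+
+    static void trace_sample(const void *sample, uint32_t size)
+    {
+        /* Wait at most 100 us for free space in the HW UP BUFFER,
+           then give up instead of blocking time-critical code. */
+        esp_err_t res = esp_apptrace_write(ESP_APPTRACE_DEST_TRAX, sample, size, 100);
+        if (res != ESP_OK) {
+            s_dropped_samples++; /* data did not fit within the timeout */
+        }
+    }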
|
|
|
|
|
|
Configuration Options and Dependencies
|
|
|
--------------------------------------
|
|
|
@@ -40,31 +41,37 @@ Using of this feature depends on two components:
|
|
|
|
|
|
1. **Host side:** Application tracing is done over JTAG, so it needs OpenOCD to be set up and running on host machine. For instructions on how to set it up, please see :doc:`JTAG Debugging <../api-guides/jtag-debugging/index>` for details.
|
|
|
|
|
|
-2. **Target side:** Application tracing functionality can be enabled in menuconfig. *Component config > Application Level Tracing* menu allows selecting destination for the trace data (HW interface for transport). Choosing any of the destinations automatically enables ``CONFIG_APPTRACE_ENABLE`` option.
|
|
|
+2. **Target side:** Application tracing functionality can be enabled in menuconfig. The *Component config > Application Level Tracing* menu allows selecting the destination for the trace data (the HW interface used for transport: JTAG and/or UART). Choosing any of the destinations automatically enables the ``CONFIG_APPTRACE_ENABLE`` option. For the UART interface, the user has to define the baud rate, the TX and RX pin numbers, and additional UART-related parameters.
|
|
|
|
|
|
.. note::
|
|
|
|
|
|
- In order to achieve higher data rates and minimize the number of dropped packets, it is recommended to optimize the setting of the JTAG clock frequency, so it is at maximum and still provides stable operation of JTAG, see :ref:`jtag-debugging-tip-optimize-jtag-speed`.
|
|
|
+ In order to achieve higher data rates and minimize the number of dropped packets, it is recommended to optimize the setting of the JTAG clock frequency so that it is at its maximum while still providing stable operation of JTAG, see :ref:`jtag-debugging-tip-optimize-jtag-speed`.
|
|
|
+
|
|
|
+There are four additional menuconfig options not mentioned above:
|
|
|
|
|
|
-There are two additional menuconfig options that are available to users:
|
|
|
+1. *Threshold for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_POSTMORTEM_FLUSH_THRESH`). This option is necessary due to the nature of working over JTAG. In that mode, trace data are exposed to the host in 16 KB blocks. In post-mortem mode, when one block is filled it is exposed to the host and the previous one becomes unavailable. In other words, trace data are overwritten with 16 KB granularity. On panic, the latest data from the current input block are exposed to the host, and the host can read them for post-analysis. A system panic may occur when only a very small amount of data has not been exposed to the host yet. In this case, the previous 16 KB of collected data will be lost and the host will see only the latest, very small piece of the trace, which can be insufficient to diagnose the problem. This menuconfig option allows avoiding such situations. It controls the threshold for flushing data in case of panic. For example, the user can decide that not less than 512 bytes of recent trace data are needed, so if there are less than 512 bytes of pending data at the moment of panic, they will not be flushed and will not overwrite the previous 16 KB. The option is only meaningful in post-mortem mode and when working over JTAG.
|
|
|
|
|
|
-1. *Threshold for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_POSTMORTEM_FLUSH_THRESH`). This option is useful when working over JTAG wherein the trace data is exposed to the host in 16 KB blocks. In post-mortem mode, when one block is filled it is exposed to the host and the previous one becomes unavailable. In other words, the trace data is overwritten in 16 KB granularity. On panic, the latest data from the current input block is exposed to the host and the host can read it for post-analysis. System panic may occur when a very small amount of data has been accumulated but not yet exposed to the host. In this case the previous 16 KB of collected data will be lost and the host will see the latest, very small piece of the trace. This data may be insufficient to diagnose the problem. Thus, this menuconfig option allows avoiding such situations. It controls the threshold for flushing data in case of a panic. For example, users can decide that they need no less than 512 bytes of the recent trace data for meaningful analysis. If there is less than 512 bytes of pending data at the moment of panic, it will not be flushed and will not overwrite the previous 16 KB. The option is only meaningful in post-mortem mode and when working over JTAG.
|
|
|
2. *Timeout for flushing last trace data to host on panic* (:ref:`CONFIG_APPTRACE_ONPANIC_HOST_FLUSH_TMO`). The option is only meaningful in streaming mode and controls the maximum time tracing module will wait for the host to read the last data in case of panic.
|
|
|
|
|
|
+3. *UART RX/TX ring buffer size* (:ref:`CONFIG_APPTRACE_UART_TX_BUFF_SIZE`). The size of the buffer depends on the amount of data transferred through the UART.
|
|
|
+
|
|
|
+4. *UART TX message size* (:ref:`CONFIG_APPTRACE_UART_TX_MSG_SIZE`). The maximum size of a single message to transfer.
|
|
|
|
|
|
How to use this library
|
|
|
-----------------------
|
|
|
|
|
|
-This library provides APIs for transferring arbitrary data between the host and {IDF_TARGET_NAME}. When enabled in menuconfig, target application tracing module is initialized automatically at the system startup. All that the user needs to do is to call corresponding APIs to send, receive or flush the data.
|
|
|
+This library provides an API for transferring arbitrary data between the host and {IDF_TARGET_NAME}. When enabled in menuconfig, the target application tracing module is initialized automatically at system startup, so all the user needs to do is call the corresponding API to send, receive, or flush the data.
|
|
|
|
|
|
.. _app_trace-application-specific-tracing:
|
|
|
|
|
|
Application Specific Tracing
|
|
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
|
|
|
|
|
-In general the user should decide what type of data should be transferred in either direction and how this data must be interpreted (processed). The following steps must be performed to transfer data between {IDF_TARGET_NAME} and the host:
|
|
|
+In general, the user should decide what type of data should be transferred in each direction and how these data must be interpreted (processed). The following steps must be performed to transfer data between the target and the host:
|
|
|
|
|
|
-1. On target side user should implement algorithms for writing trace data to the host. The following piece of code demonstrates an example of how to do this.
|
|
|
+1. On the target side, the user should implement algorithms for writing trace data to the host. The piece of code below shows an example of how to do this.
|
|
|
|
|
|
.. code-block:: c
|
|
|
|
|
|
@@ -97,7 +104,7 @@ In general the user should decide what type of data should be transferred in eit
|
|
|
return res;
|
|
|
}
|
|
|
|
|
|
- The user may also want to receive data from the host. The following piece of code shows an example of how to do this.
|
|
|
+ The user may also want to receive data from the host, depending on their needs. The piece of code below shows an example of how to do this.
|
|
|
|
|
|
.. code-block:: c
|
|
|
|
|
|
@@ -160,13 +167,13 @@ In general the user should decide what type of data should be transferred in eit
|
|
|
OpenOCD Application Level Tracing Commands
|
|
|
""""""""""""""""""""""""""""""""""""""""""
|
|
|
|
|
|
-*HW UP BUFFER* is shared between user data blocks and filling of the allocated memory is performed on behalf of the API caller (in task or ISR context). In a multithreaded environment, it can happen that the task/ISR which fills the buffer is preempted by another high priority task/ISR. It is possible that the user data preparation process is not complete when that chunk is read by the host. To handle such scenarios, the tracing module prepends all user data chunks with a header that contains allocated user buffer size (2 bytes) and the length of the actual written data (2 bytes). So total length of the header is 4 bytes. OpenOCD command which reads trace data reports an error when it reads an incomplete user data chunk. In any case, it puts the contents of the whole user chunk (including unfilled area) to the output file.
|
|
|
+*HW UP BUFFER* is shared between user data blocks, and filling of the allocated memory is performed on behalf of the API caller (in task or ISR context). In a multithreading environment it can happen that the task/ISR which fills the buffer is preempted by another high-priority task/ISR, so it is possible that the user data preparation process is not completed at the moment when that chunk is read by the host. To handle such conditions, the tracing module prepends every user data chunk with a header which contains the allocated user buffer size (2 bytes) and the length of the actually written data (2 bytes), so the total length of the header is 4 bytes. The OpenOCD command which reads trace data reports an error when it reads an incomplete user data chunk, but in any case it puts the contents of the whole user chunk (including the unfilled area) into the output file.
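+Purely for illustration, the 4-byte header described above can be pictured as the following C structure; the field names are hypothetical and only reflect the layout stated in the text:
+
+.. code-block:: c
+
+    #include <stdint.h>
+
+    /* Illustrative layout of the header prepended to every user data chunk. */
+    typedef struct {
+        uint16_t block_size; /* allocated user buffer size, in bytes */
+        uint16_t wr_size;    /* length of the data actually written, in bytes */
+        /* user data follows; (block_size - wr_size) trailing bytes may be unfilled */
+    } app_trace_chunk_hdr_t;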
|
|
|
|
|
|
Below is the description of available OpenOCD application tracing commands.
|
|
|
|
|
|
.. note::
|
|
|
|
|
|
- Currently, OpenOCD does not provide commands to send arbitrary user data to the target.
|
|
|
+ Currently, OpenOCD does not provide commands to send arbitrary user data to the target.
|
|
|
|
|
|
|
|
|
Command usage:
|
|
|
@@ -192,15 +199,15 @@ Start command syntax:
|
|
|
``outfile``
|
|
|
Path to file to save data from both CPUs. This argument should have the following format: ``file://path/to/file``.
|
|
|
``poll_period``
|
|
|
- Data polling period (in ms) for available trace data. If greater than 0 then command runs in non-blocking mode. By default, 1 ms.
|
|
|
+ Data polling period (in ms) for available trace data. If greater than 0, the command runs in non-blocking mode. By default, 1 ms.
|
|
|
``trace_size``
|
|
|
Maximum size of data to collect (in bytes). Tracing is stopped after specified amount of data is received. By default -1 (trace size stop trigger is disabled).
|
|
|
``stop_tmo``
|
|
|
- Idle timeout (in sec). Tracing is stopped if there is no data for a specified period of time. By default -1 (disable this stop trigger). Optionally set it to a value longer than the longest pause between tracing commands from the target.
|
|
|
+ Idle timeout (in sec). Tracing is stopped if there is no data for the specified period of time. By default -1 (this stop trigger is disabled). Optionally set it to a value longer than the longest pause between tracing commands from the target.
|
|
|
``wait4halt``
|
|
|
- If 0 start tracing immediately, otherwise command waits for the target to be halted (after reset, by breakpoint etc.) and then automatically resumes it and starts tracing. By default, 0.
|
|
|
+ If 0, start tracing immediately; otherwise the command waits for the target to be halted (after reset, by breakpoint, etc.), then automatically resumes it and starts tracing. By default, 0.
|
|
|
``skip_size``
|
|
|
- Number of bytes to skip at the start. By default, 0.
|
|
|
+ Number of bytes to skip at the start. By default, 0.
|
|
|
|
|
|
.. note::
|
|
|
|
|
|
@@ -220,7 +227,7 @@ Command usage examples:
|
|
|
|
|
|
.. note::
|
|
|
|
|
|
- Tracing data is buffered before it is made available to OpenOCD. If you see a "Data timeout!" message, it is likely that the target is not sending enough data to empty the buffer to OpenOCD before the timeout. Either increase the timeout or use a function ``esp_apptrace_flush()`` to flush the data on specific intervals.
|
|
|
+ Tracing data is buffered before it is made available to OpenOCD. If you see a "Data timeout!" message, the target is likely not sending enough data to empty the buffer to OpenOCD before the timeout expires. Either increase the timeout or use the ``esp_apptrace_flush()`` function to flush the data at specific intervals.
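+ For example, pending trace data could be flushed periodically from a dedicated task; a minimal sketch (the destination, timeout value, and task period are assumptions for the example) might look like this:
+
+ .. code-block:: c
+
+     #include "freertos/FreeRTOS.h"
+     #include "freertos/task.h"
+     #include "esp_app_trace.h"
+
+     static void trace_flush_task(void *arg)
+     {
+         while (1) {
+             /* Push any buffered trace data to the host roughly every 100 ms. */
+             esp_apptrace_flush(ESP_APPTRACE_DEST_TRAX, 100000 /* tmo */);
+             vTaskDelay(pdMS_TO_TICKS(100));
+         }
+     }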
|
|
|
|
|
|
2. Retrieve tracing data indefinitely in non-blocking mode.
|
|
|
|
|
|
@@ -228,7 +235,7 @@ Command usage examples:
|
|
|
|
|
|
esp apptrace start file://trace.log 1 -1 -1 0 0
|
|
|
|
|
|
- There is no limitation on the size of collected data and there is no data timeout set. This process may be stopped by issuing ``esp apptrace stop`` command on OpenOCD telnet prompt, or by pressing Ctrl+C in OpenOCD window.
|
|
|
+ There is no limitation on the size of collected data and no data timeout is set. This process may be stopped by issuing the ``esp apptrace stop`` command on the OpenOCD telnet prompt, or by pressing Ctrl+C in the OpenOCD window.
|
|
|
|
|
|
3. Retrieve tracing data and save them indefinitely.
|
|
|
|
|
|
@@ -252,19 +259,20 @@ Command usage examples:
|
|
|
Logging to Host
|
|
|
^^^^^^^^^^^^^^^
|
|
|
|
|
|
-IDF implements a useful feature: logging to host via application level tracing library. This is a kind of semihosting when all `ESP_LOGx` calls are redirected to the host instead of UART. This can be useful because "printing to host" eliminates some steps performed when logging to UART. Most part of the work is done on the host.
|
|
|
+IDF implements a useful feature: logging to the host via the application level tracing library. This is a kind of semihosting where all `ESP_LOGx` calls send the strings to be printed to the host instead of UART. This can be useful because "printing to host" eliminates some of the steps performed when logging to UART; most of the work is done on the host.
|
|
|
|
|
|
-By default, IDF's logging library uses a vprintf-like function to write formatted output to dedicated UART. In general, it involves the following steps:
|
|
|
+By default, IDF's logging library uses a vprintf-like function to write formatted output to the dedicated UART. In general, it involves the following steps:
|
|
|
|
|
|
1. Format string is parsed to obtain type of each argument.
|
|
|
2. According to its type every argument is converted to string representation.
|
|
|
3. Format string combined with converted arguments is sent to UART.
|
|
|
|
|
|
-Though the implementation of a vprintf-like function can be optimized to a certain level, all steps above have to be performed in any case and every step takes some time (especially item 3). Hence, it is quite common to observe that with additional logging added to a program for debugging, the program behavior changes and the problem is not reproduced. In the worst case, the program may not work normally at all and ends up with an error or even hangs.
|
|
|
+Though the implementation of a vprintf-like function can be optimized to a certain level, all of the above steps have to be performed in any case, and every step takes some time (especially item 3). So it frequently happens that after additional logging is added to the program to identify a problem, the program behavior changes and the problem cannot be reproduced, or, in the worst case, the program cannot work normally at all and ends up with an error or even hangs.
|
|
|
|
|
|
Possible ways to overcome this problem are to use higher UART bitrates (or another faster interface) and/or move string formatting procedure to the host.
|
|
|
|
|
|
-Application level tracing feature can be used to transfer log information to the host using ``esp_apptrace_vprintf`` function. This function does not perform full parsing of the format string and arguments, and instead just calculates the number of arguments passed and sends them along with the format string address to the host. On the host, log data are processed and printed out by a special Python script.
|
|
|
+The application level tracing feature can be used to transfer log information to the host using the ``esp_apptrace_vprintf`` function. This function does not perform full parsing of the format string and arguments; instead, it just calculates the number of arguments passed and sends them, along with the format string address, to the host. On the host, the log data are processed and printed out by a special Python script.
|
|
|
+
|
|
|
|
|
|
Limitations
|
|
|
"""""""""""
|
|
|
@@ -282,7 +290,7 @@ How To Use It
|
|
|
|
|
|
In order to use logging via trace module user needs to perform the following steps:
|
|
|
|
|
|
-1. On target side, the user must use the :cpp:func:`esp_apptrace_vprintf` function to send log data to the host. Example code is provided in :example:`system/app_trace_to_host`.
|
|
|
+1. On the target side, a special vprintf-like function needs to be installed. As mentioned earlier, this function is ``esp_apptrace_vprintf``; it sends log data to the host. Example code is provided in :example:`system/app_trace_to_host`.
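+ A minimal sketch of installing it (here using the ``esp_log_set_vprintf`` hook from ``esp_log.h``; the tag and message are placeholders) could look like this:
+
+ .. code-block:: c
+
+     #include "esp_app_trace.h"
+     #include "esp_log.h"
+
+     void app_main(void)
+     {
+         /* Redirect all ESP_LOGx output to the host via the apptrace module. */
+         esp_log_set_vprintf(esp_apptrace_vprintf);
+
+         ESP_LOGI("example", "this message goes to the host instead of UART");
+     }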
|
|
|
2. Follow instructions in items 2-5 in `Application Specific Tracing`_.
|
|
|
3. To print out collected log records, run the following command in terminal: ``$IDF_PATH/tools/esp_app_trace/logtrace_proc.py /path/to/trace/file /path/to/program/elf/file``.
|
|
|
|
|
|
@@ -313,20 +321,21 @@ Optional arguments:
|
|
|
System Behavior Analysis with SEGGER SystemView
|
|
|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
|
|
|
|
|
-Another useful IDF feature built on top of application tracing library is the system level tracing which produces traces compatible with SEGGER SystemView tool (see `SystemView <https://www.segger.com/products/development-tools/systemview/>`_). SEGGER SystemView is a real-time recording and visualization tool that allows to analyze runtime behavior of an application.
|
|
|
-
|
|
|
-.. note::
|
|
|
-
|
|
|
- Currently, IDF based applications are able to generate SystemView compatible traces in form of files to be opened in the SystemView application. The tracing process cannot yet be controlled using that tool.
|
|
|
+Another useful IDF feature built on top of the application tracing library is system level tracing, which produces traces compatible with the SEGGER SystemView tool (see `SystemView <https://www.segger.com/products/development-tools/systemview/>`_). SEGGER SystemView is a real-time recording and visualization tool that allows analyzing the runtime behavior of an application. It is possible to view events in real time through the UART interface.
|
|
|
|
|
|
|
|
|
How To Use It
|
|
|
"""""""""""""
|
|
|
|
|
|
-Support for this feature is enabled by *Component config > Application Level Tracing > FreeRTOS SystemView Tracing* (:ref:`CONFIG_APPTRACE_SV_ENABLE`) menuconfig option. There are several other options enabled under the same menu:
|
|
|
+Support for this feature is enabled by the *Component config > Application Level Tracing > FreeRTOS SystemView Tracing* (:ref:`CONFIG_APPTRACE_SV_ENABLE`) menuconfig option. There are several other options enabled under the same menu:
|
|
|
|
|
|
-1. {IDF_TARGET_NAME} timer to use as SystemView timestamp source: (:ref:`CONFIG_APPTRACE_SV_TS_SOURCE`) selects the source of timestamps for SystemView events. In single core mode timestamps are generated using {IDF_TARGET_NAME} internal cycle counter running at maximum 240 Mhz (~4 ns granularity). In dual-core mode external timer working at 40 Mhz is used, so timestamp granularity is 25 ns.
|
|
|
-2. Individually enabled or disabled collection of SystemView events (``CONFIG_APPTRACE_SV_EVT_XXX``):
|
|
|
+1. SystemView destination. Select the destination interface: JTAG or UART. In case of UART, it is possible to connect the SystemView application to {IDF_TARGET_NAME} directly and receive data in real time.
|
|
|
+2. {IDF_TARGET_NAME} timer to use as the SystemView timestamp source: (:ref:`CONFIG_APPTRACE_SV_TS_SOURCE`) selects the source of timestamps for SystemView events. In single-core mode, timestamps are generated using the {IDF_TARGET_NAME} internal cycle counter running at a maximum of 240 MHz (~4 ns granularity). In dual-core mode, an external timer working at 40 MHz is used, so the timestamp granularity is 25 ns.
|
|
|
+3. Individually enable or disable collection of SystemView events (``CONFIG_APPTRACE_SV_EVT_XXX``):
|
|
|
|
|
|
- Trace Buffer Overflow Event
|
|
|
- ISR Enter Event
|
|
|
@@ -344,6 +353,8 @@ Support for this feature is enabled by *Component config > Application Level Tra
|
|
|
|
|
|
IDF has all the code required to produce SystemView compatible traces, so user can just configure necessary project options (see above), build, download the image to target and use OpenOCD to collect data as described in the previous sections.
|
|
|
|
|
|
+4. Select the Pro or App CPU in the menuconfig options *Component config > Application Level Tracing > FreeRTOS SystemView Tracing* to trace over the UART interface in real time.
|
|
|
+
|
|
|
|
|
|
OpenOCD SystemView Tracing Command Options
|
|
|
""""""""""""""""""""""""""""""""""""""""""
|
|
|
@@ -370,7 +381,7 @@ Start command syntax:
|
|
|
``outfile2``
|
|
|
Path to file to save data from APP CPU. This argument should have the following format: ``file://path/to/file``.
|
|
|
``poll_period``
|
|
|
- Data polling period (in ms) for available trace data. If greater than 0 then command runs in non-blocking mode. By default, 1 ms.
|
|
|
+ Data polling period (in ms) for available trace data. If greater than 0, the command runs in non-blocking mode. By default, 1 ms.
|
|
|
``trace_size``
|
|
|
Maximum size of data to collect (in bytes). Tracing is stopped after specified amount of data is received. By default -1 (trace size stop trigger is disabled).
|
|
|
``stop_tmo``
|
|
|
@@ -404,13 +415,15 @@ Command usage examples:
|
|
|
Data Visualization
|
|
|
""""""""""""""""""
|
|
|
|
|
|
-After trace data is collected, users can use a special tool to visualize the results and inspect behavior of the program.
|
|
|
+After the trace data are collected, the user can use a special tool to visualize the results and inspect the behavior of the program.
|
|
|
|
|
|
.. only:: not CONFIG_FREERTOS_UNICORE
|
|
|
|
|
|
- Unfortunately SystemView does not support tracing from multiple cores. So when tracing from {IDF_TARGET_NAME} working in dual-core mode two files are generated: one for PRO CPU and another one for APP CPU. Users can load both files into separate instances of the tool.
|
|
|
+ Unfortunately, SystemView does not support tracing from multiple cores. So when tracing from {IDF_TARGET_NAME} working with JTAG in dual-core mode, two files are generated: one for the PRO CPU and another one for the APP CPU. The user can load each file into a separate instance of the tool. For tracing over UART, the user can select in menuconfig (*Component config > Application Level Tracing > FreeRTOS SystemView Tracing*) which CPU (Pro or App) has to be traced.
|
|
|
|
|
|
-It is uneasy and awkward to analyze data for every core in separate instance of the tool. Fortunately there is an Eclipse plugin called *Impulse* which can load several trace files and makes it possible to inspect events from both cores in one view. Also, this plugin has no limitation of 1,000,000 events as compared to free version of SystemView.
|
|
|
+It is inconvenient and awkward to analyze data for each core in a separate instance of the tool. Fortunately, there is an Eclipse plugin called *Impulse* which can load several trace files and makes it possible to inspect events from both cores in one view. Also, this plugin has no limitation of 1,000,000 events, unlike the free version of SystemView.
|
|
|
|
|
|
Good instruction on how to install, configure and visualize data in Impulse from one core can be found `here <https://mcuoneclipse.com/2016/07/31/impulse-segger-systemview-in-eclipse/>`_.
|
|
|
|
|
|
@@ -438,7 +451,7 @@ Good instruction on how to install, configure and visualize data in Impulse from
|
|
|
|
|
|
.. note::
|
|
|
|
|
|
- If you have problems with visualization (no data are shown or strange behavior of zoom action is observed) you can try to delete current signal hierarchy and double-click on the necessary file or port. Eclipse will ask you to create new signal hierarchy.
|
|
|
+ If you have problems with visualization (no data are shown or strange behavior of the zoom action is observed), you can try to delete the current signal hierarchy and double-click on the necessary file or port. Eclipse will ask you to create a new signal hierarchy.
|
|
|
|
|
|
|
|
|
.. _app_trace-gcov-source-code-coverage:
|
|
|
@@ -462,7 +475,7 @@ Generally, using Gcov to compile and run programs on the Host will undergo these
|
|
|
Gcov and Gcovr in ESP-IDF
|
|
|
"""""""""""""""""""""""""""
|
|
|
|
|
|
-Using Gcov in ESP-IDF is complicated due to the fact that the program is running remotely and not on the host (i.e., on the target). The code coverage data (i.e., the ``.gcda`` files) is initially stored on the target itself. OpenOCD is then used to dump the code coverage data from the target to the host via JTAG during runtime. Using Gcov in ESP-IDF can be split into the following steps.
|
|
|
+Using Gcov in ESP-IDF is complicated by the fact that the program is running remotely from the Host (i.e., on the target). The code coverage data (i.e., the ``.gcda`` files) is initially stored on the target itself. OpenOCD is then used to dump the code coverage data from the target to the host via JTAG during runtime. Using Gcov in ESP-IDF can be split into the following steps.
|
|
|
|
|
|
1. :ref:`app_trace-gcov-setup-project`
|
|
|
2. :ref:`app_trace-gcov-dumping-data`
|
|
|
@@ -479,7 +492,7 @@ Compiler Option
|
|
|
In order to obtain code coverage data in a project, one or more source files within the project must be compiled with the ``--coverage`` option. In ESP-IDF, this can be achieved at the component level or the individual source file level:
|
|
|
|
|
|
- To cause all source files in a component to be compiled with the ``--coverage`` option, you can add ``target_compile_options(${COMPONENT_LIB} PRIVATE --coverage)`` to the ``CMakeLists.txt`` file of the component.
|
|
|
-- To cause a select number of source files (e.g. ``source1.c`` and ``source2.c``) in the same component to be compiled with the ``--coverage`` option, you can add ``set_source_files_properties(source1.c source2.c PROPERTIES COMPILE_FLAGS --coverage)`` to the ``CMakeLists.txt`` file of the component.
|
|
|
+- To cause a select number of source files (e.g. ``source1.c`` and ``source2.c``) in the same component to be compiled with the ``--coverage`` option, you can add ``set_source_files_properties(source1.c source2.c PROPERTIES COMPILE_FLAGS --coverage)`` to the ``CMakeLists.txt`` file of the component.
|
|
|
|
|
|
When a source file is compiled with the ``--coverage`` option (e.g. ``gcov_example.c``), the compiler will generate the ``gcov_example.gcno`` file in the project's build directory.
|
|
|
|
|
|
@@ -488,7 +501,7 @@ Project Configuration
|
|
|
|
|
|
Before building a project with source code coverage, ensure that the following project configuration options are enabled by running ``idf.py menuconfig``.
|
|
|
|
|
|
-- Enable the application tracing module by choosing *Trace Memory* for the :ref:`CONFIG_APPTRACE_DESTINATION` option.
|
|
|
+- Enable the application tracing module by choosing *Trace Memory* for the :ref:`CONFIG_APPTRACE_DESTINATION1` option.
|
|
|
- Enable Gcov to host via the :ref:`CONFIG_APPTRACE_GCOV_ENABLE`
|
|
|
|
|
|
.. _app_trace-gcov-dumping-data:
|
|
|
@@ -496,23 +509,23 @@ Before building a project with source code coverage, ensure that the following p
|
|
|
Dumping Code Coverage Data
|
|
|
""""""""""""""""""""""""""
|
|
|
|
|
|
-Once a project has been complied with the ``--coverage`` option and flashed onto the target, code coverage data will be stored internally on the target (i.e., in trace memory) whilst the application runs. The process of transferring code coverage data from the target to the Host is known as dumping.
|
|
|
+Once a project has been compiled with the ``--coverage`` option and flashed onto the target, code coverage data will be stored internally on the target (i.e., in trace memory) whilst the application runs. The process of transferring code coverage data from the target to the Host is known as dumping.
|
|
|
|
|
|
The dumping of coverage data is done via OpenOCD (see :doc:`JTAG Debugging <../api-guides/jtag-debugging/index>` on how to setup and run OpenOCD). A dump is triggered by issuing commands to OpenOCD, therefore a telnet session to OpenOCD must be opened to issue such commands (run ``telnet localhost 4444``). Note that GDB could be used instead of telnet to issue commands to OpenOCD, however all commands issued from GDB will need to be prefixed as ``mon <oocd_command>``.
|
|
|
|
|
|
When the target dumps code coverage data, the ``.gcda`` files are stored in the project's build directory. For example, if ``gcov_example_main.c`` of the ``main`` component was compiled with the ``--coverage`` option, then dumping the code coverage data would generate a ``gcov_example_main.gcda`` in ``build/esp-idf/main/CMakeFiles/__idf_main.dir/gcov_example_main.c.gcda``. Note that the ``.gcno`` files produced during compilation are also placed in the same directory.
|
|
|
|
|
|
-The dumping of code coverage data can be done multiple times throughout an application's lifetime. Each dump will simply update the ``.gcda`` file with the newest code coverage information. Code coverage data is accumulative, thus the newest data will contain the total execution count of each code path over the application's entire lifetime.
|
|
|
+The dumping of code coverage data can be done multiple times throughout an application's lifetime. Each dump will simply update the ``.gcda`` file with the newest code coverage information. Code coverage data is cumulative, thus the newest data will contain the total execution count of each code path over the application's entire lifetime.
|
|
|
|
|
|
ESP-IDF supports two methods of dumping code coverage data from the target to the host:
|
|
|
|
|
|
-* Instant Run-Time Dump
|
|
|
+* Instant Run-Time Dump
|
|
|
* Hard-coded Dump
|
|
|
|
|
|
Instant Run-Time Dump
|
|
|
~~~~~~~~~~~~~~~~~~~~~
|
|
|
|
|
|
-An Instant Run-Time Dump is triggered by calling the ``{IDF_TARGET_NAME} gcov`` OpenOCD command (via a telnet session). Once called, OpenOCD will immediately preempt the {IDF_TARGET_NAME}'s current state and execute a builtin IDF Gcov debug stub function. The debug stub function will handle the dumping of data to the Host. Upon completion, the {IDF_TARGET_NAME} will resume its current state.
|
|
|
+An Instant Run-Time Dump is triggered by calling the ``{IDF_TARGET_NAME} gcov`` OpenOCD command (via a telnet session). Once called, OpenOCD will immediately preempt the {IDF_TARGET_NAME}'s current state and execute a builtin IDF Gcov debug stub function. The debug stub function will handle the dumping of data to the Host. Upon completion, the {IDF_TARGET_NAME} will resume its current state.
|
|
|
|
|
|
Hard-coded Dump
|
|
|
~~~~~~~~~~~~~~~
|
|
|
@@ -545,10 +558,10 @@ Once the code coverage data has been dumped, the ``.gcno``, ``.gcda`` and the so
|
|
|
|
|
|
Both Gcov and Gcovr can be used to generate code coverage reports. Gcov is provided along with the Xtensa toolchain, whilst Gcovr may need to be installed separately. For details on how to use Gcov or Gcovr, refer to `Gcov documentation <https://gcc.gnu.org/onlinedocs/gcc/Gcov.html>`_ and `Gcovr documentation <http://gcovr.com/>`_.
|
|
|
|
|
|
-Adding Gcov Build Target to Project
|
|
|
+Adding Gcovr Build Target to Project
|
|
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|
|
|
|
|
-To make report generation more convenient, users can define additional build targets in their projects such that the report generation can be done with a single build command.
|
|
|
+To make report generation more convenient, users can define additional build targets in their projects such that report generation can be done with a single build command.
|
|
|
|
|
|
Add the following lines to the ``CMakeLists.txt`` file of your project.
|
|
|
|