.. _module-pw_tokenizer-get-started:

=============================
Get started with pw_tokenizer
=============================
.. pigweed-module-subpage::
   :name: pw_tokenizer

.. _module-pw_tokenizer-get-started-overview:

--------
Overview
--------
There are two sides to ``pw_tokenizer``, which we call tokenization and
detokenization.

* **Tokenization** converts string literals in the source code to binary tokens
  at compile time. If the string has printf-style arguments, these are encoded
  to compact binary form at runtime.
* **Detokenization** converts tokenized strings back to the original
  human-readable strings.
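
To make the runtime side concrete, here is a toy Python sketch of how a
tokenized message with integer arguments might be packed. It is modeled on
``pw_tokenizer``'s documented wire format (a little-endian 32-bit token
followed by zigzag varint-encoded integer arguments); the function names here
are illustrative, not part of the library.

.. code-block:: python

   import struct


   def zigzag(value: int) -> int:
       """Maps signed integers to unsigned so small magnitudes stay small."""
       return (value << 1) ^ (value >> 63)


   def varint(value: int) -> bytes:
       """Encodes an unsigned integer as a little-endian base-128 varint."""
       out = bytearray()
       while True:
           byte = value & 0x7F
           value >>= 7
           if value:
               out.append(byte | 0x80)
           else:
               out.append(byte)
               return bytes(out)


   def encode(token: int, *int_args: int) -> bytes:
       """Packs a 32-bit token and its integer arguments into one message."""
       payload = struct.pack('<I', token)
       for arg in int_args:
           payload += varint(zigzag(arg))
       return payload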

Here's an overview of what happens when ``pw_tokenizer`` is used:

1. During compilation, the ``pw_tokenizer`` module hashes string literals to
   generate stable 32-bit tokens.
2. The tokenization macro removes these strings by declaring them in an ELF
   section that is excluded from the final binary.
3. After compilation, strings are extracted from the ELF to build a database of
   tokenized strings for use by the detokenizer. The ELF file may also be used
   directly.
4. During operation, the device encodes the string token and its arguments, if
   any.
5. The encoded tokenized strings are sent off-device or stored.
6. Off-device, the detokenizer tools use the token database to decode the
   strings to human-readable form.
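
Step 1 can be illustrated with a Python sketch of the 65599-style hash that
``pw_tokenizer`` documents for token generation. Treat the details (constants,
hash-length handling) as illustrative, not as a drop-in replacement for the
library's own implementation.

.. code-block:: python

   HASH_CONSTANT = 65599


   def tokenize_string(string: str) -> int:
       """Hashes a string into a stable 32-bit token (65599-style hash)."""
       hash_value = len(string)
       coefficient = HASH_CONSTANT
       for byte in string.encode():
           hash_value = (hash_value + coefficient * byte) % 2**32
           coefficient = (coefficient * HASH_CONSTANT) % 2**32
       return hash_value

Because the hash depends only on the string's contents, the same literal always
produces the same token, which is what makes the off-device database lookup
possible.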

.. _module-pw_tokenizer-get-started-integration:

Integrating with Bazel / GN / CMake projects
============================================
Integrating ``pw_tokenizer`` requires a few steps beyond building the code. This
section describes one way ``pw_tokenizer`` might be integrated with a project.
These steps can be adapted as needed.

#. Add ``pw_tokenizer`` to your build. Build files for GN, CMake, and Bazel are
   provided. For Make or other build systems, add the files listed in the
   ``pw_tokenizer`` target in ``BUILD.gn`` to the build.
#. Use the tokenization macros in your code. See
   :ref:`module-pw_tokenizer-tokenization`.
#. Ensure the ``.pw_tokenizer.*`` sections are included in your output ELF file:

   * In GN and CMake, this is done automatically.
   * In Bazel, include ``"@pigweed//pw_tokenizer:linker_script"`` in the
     ``deps`` of your main binary rule (assuming you're already overriding the
     default linker script).
   * If your binary does not use a custom linker script, you can pass
     ``add_tokenizer_sections_to_default_script.ld`` to the linker, which
     augments the default linker script rather than overriding it.
   * Alternatively, add the contents of ``pw_tokenizer_linker_sections.ld`` to
     your project's linker script.

#. Compile your code to produce an ELF file.
#. Run ``database.py create`` on the ELF file to generate a CSV token
   database. See :ref:`module-pw_tokenizer-managing-token-databases`.
#. Commit the token database to your repository. See notes in
   :ref:`module-pw_tokenizer-database-management`.
#. Integrate a ``database.py add`` command into your build to automatically
   update the committed token database. In GN, use the ``pw_tokenizer_database``
   template to do this. See :ref:`module-pw_tokenizer-update-token-database`.
#. Integrate ``detokenize.py`` or the C++ detokenization library with your tools
   to decode tokenized logs. See :ref:`module-pw_tokenizer-detokenization`.
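
The decode step at the end of this pipeline can be sketched with a toy
detokenizer. The real tools are ``detokenize.py`` and the C++ detokenization
library; the in-memory dict below stands in for a token database, and only
varint-encoded integer arguments are handled.

.. code-block:: python

   import struct


   def detokenize(message: bytes, token_db: dict[int, str]) -> str:
       """Looks up a message's token and formats its integer arguments."""
       token, = struct.unpack('<I', message[:4])
       fmt = token_db.get(token)
       if fmt is None:
           return f'<unknown token {token:#010x}>'
       args, rest = [], message[4:]
       while rest:
           value = shift = 0
           for i, byte in enumerate(rest):
               value |= (byte & 0x7F) << shift
               shift += 7
               if not byte & 0x80:  # Last byte of this varint.
                   rest = rest[i + 1:]
                   break
           args.append((value >> 1) ^ -(value & 1))  # Zigzag decode.
       return fmt % tuple(args)

For example, with a hypothetical database entry mapping token ``0x01020304`` to
``'Temp: %d C'``, the message ``b'\x04\x03\x02\x01\x2a'`` decodes to
``'Temp: 21 C'``.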

Using with Zephyr
=================
When building ``pw_tokenizer`` with Zephyr, three Kconfig options are currently
available:

* ``CONFIG_PIGWEED_TOKENIZER`` automatically links ``pw_tokenizer`` as well
  as its dependencies.
* ``CONFIG_PIGWEED_TOKENIZER_BASE64`` automatically links
  ``pw_tokenizer.base64`` as well as its dependencies.
* ``CONFIG_PIGWEED_DETOKENIZER`` automatically links
  ``pw_tokenizer.decoder`` as well as its dependencies.

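Assuming a typical Zephyr application layout, these options would be set in the
application's ``prj.conf``; for example, to enable both tokenization and the
decoder:

.. code-block:: kconfig

   CONFIG_PIGWEED_TOKENIZER=y
   CONFIG_PIGWEED_DETOKENIZER=y
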
Once enabled, the tokenizer headers can be included like any Zephyr headers:

.. code-block:: cpp

   #include <pw_tokenizer/tokenize.h>

.. note::
  Zephyr handles the additional linker sections via
  ``pw_tokenizer_zephyr.ld``, which is added to the end of the linker file
  via a call to ``zephyr_linker_sources(SECTIONS ...)``.
95