
Searched full:multimodal (Results 1 – 25 of 26) sorted by relevance

/aosp_15_r20/external/executorch/examples/models/llava/
README.md:4 - Demonstrate how to export the LLaVA multimodal model to generate an ExecuTorch .PTE file.
README.md:24 multimodal model that combines a vision encoder and Vicuna (a LLama2 based text
README.md:26 impressive chat capabilities mimicking spirits of the cutting edge multimodal
/aosp_15_r20/external/executorch/extension/llm/runner/
image_prefiller.h:9 // Given an image tensor, prefill the KV cache of a multimodal LLM.
image_prefiller.h:29 * @param image The image input to the multimodal LLM.
multimodal_runner.h:9 // A simple multimodal LLM runner that includes preprocessing and post
multimodal_runner.h:47 "Creating Multimodal LLM runner: model_path=%s, tokenizer_path=%s", in temperature_()
/aosp_15_r20/external/executorch/examples/models/llama/tokenizer/
llama_tiktoken.h:17 Multimodal, (enumerator)
llama_tiktoken.cpp:77 case Version::Multimodal: in _get_special_tokens()
/aosp_15_r20/external/googleapis/google/ai/generativelanguage/v1beta/
generativelanguage_v1beta.yaml:20 to be multimodal. It can generalize and seamlessly understand, operate
generative_service.proto:33 // API for using Large Models that generate multimodal content and have
/aosp_15_r20/external/googleapis/google/ai/generativelanguage/v1/
generativelanguage_v1.yaml:15 to be multimodal. It can generalize and seamlessly understand, operate
generative_service.proto:32 // API for using Large Models that generate multimodal content and have
/aosp_15_r20/external/googleapis/google/cloud/aiplatform/v1/
prediction_service.proto:166 // Generate content with multimodal inputs.
prediction_service.proto:180 // Generate content with multimodal inputs with streaming support.
/aosp_15_r20/external/executorch/examples/models/llama/tokenizer/test/
test_tiktoken.cpp:28 tokenizer_ = get_tiktoken_for_llama(Version::Multimodal); in SetUp()
/aosp_15_r20/external/executorch/examples/models/llava/runner/
llava_runner.h:9 // A simple multimodal LLM runner that includes preprocessing and post
/aosp_15_r20/external/googleapis/google/cloud/aiplatform/v1beta1/
prediction_service.proto:165 // Generate content with multimodal inputs.
prediction_service.proto:179 // Generate content with multimodal inputs with streaming support.
/aosp_15_r20/external/executorch/examples/models/llama3_2_vision/text_decoder/
model.py:40 Just the text decoder portions of the Llama3.2 multimodal model.
/aosp_15_r20/external/pytorch/docs/source/notes/
fsdp.rst:28 <https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-dist…
/aosp_15_r20/external/python/cpython3/Doc/library/
statistics.rst:410 Now handles multimodal datasets by returning the first mode encountered.
/aosp_15_r20/external/python/cpython3/Doc/whatsnew/
3.8.rst:1884 when given multimodal data. Instead, it returns the first mode
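The behavior change described in the two CPython documentation hits above can be confirmed directly. Since Python 3.8, `statistics.mode` no longer raises `StatisticsError` on multimodal data; it returns the first mode encountered, and the companion function `statistics.multimode` (also added in 3.8) returns all modes. A minimal sketch:

```python
import statistics

# Multimodal dataset: both 1 and 2 appear twice.
data = [1, 1, 2, 2, 3]

# Since Python 3.8, mode() returns the first mode encountered
# instead of raising StatisticsError.
print(statistics.mode(data))       # 1

# multimode() (new in 3.8) returns every mode, in first-seen order.
print(statistics.multimode(data))  # [1, 2]
```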
/aosp_15_r20/external/googleapis/
api-index-v1.json:8296 …dels. Gemini is our most capable model, built from the ground up to be multimodal. It can generali…
api-index-v1.json:8428 …dels. Gemini is our most capable model, built from the ground up to be multimodal. It can generali…
/aosp_15_r20/external/cldr/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/transforms/
internal_raw_IPA-old.txt:132859 multimodal %7157
/aosp_15_r20/packages/inputmethods/LatinIME/dictionaries/
en_GB_wordlist.combined.gz:1 dictionary=main:en_gb,locale=en_GB,description=English (UK),date ...
en_US_wordlist.combined.gz
en_wordlist.combined.gz:1 dictionary=main:en,locale=en,description=English,date=1414726273, ...
pt_BR_wordlist.combined.gz
fr_wordlist.combined.gz
pt_PT_wordlist.combined.gz
