gobbli.model.use.model module

class gobbli.model.use.model.USE(data_dir=None, load_existing=False, use_gpu=False, nvidia_visible_devices='all', logger=None, **kwargs)[source]
    Bases: gobbli.model.base.BaseModel, gobbli.model.mixin.EmbedMixin

    Wrapper for Universal Sentence Encoder embeddings: https://tfhub.dev/google/universal-sentence-encoder/4
    Create a model.

    - Parameters
        data_dir (Optional[Path]) – Optional path to a directory used to store model data. If not given, a unique directory under GOBBLI_DIR will be created and used.
        load_existing (bool) – If True, data_dir should be a directory that was previously used to create a model. Parameters will be loaded to match the original model, and user-specified model parameters will be ignored. If False, data_dir must be empty if it already exists.
        use_gpu (bool) – If True, use the nvidia-docker runtime (https://github.com/NVIDIA/nvidia-docker) to expose NVIDIA GPU(s) to the container. This will cause an error if the machine you're running on doesn't have an NVIDIA GPU and/or doesn't have the nvidia-docker runtime installed.
        nvidia_visible_devices (str) – Which GPUs to make available to the container; ignored if use_gpu is False. If not 'all', should be a comma-separated string, e.g. '1,2'.
        logger (Optional[Logger]) – If passed, use this logger for logging instead of the default module-level logger.
        **kwargs – Additional model-specific parameters to be passed to the model's init() method.
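The data_dir / load_existing rules above can be sketched in plain Python. This is a hypothetical helper, not gobbli's implementation: resolve_data_dir is an assumed name, and a temp directory stands in for GOBBLI_DIR.

```python
import tempfile
import uuid
from pathlib import Path


def resolve_data_dir(data_dir=None, load_existing=False):
    """Sketch of the directory rules described above (hypothetical helper)."""
    if data_dir is None:
        # No directory given: create a unique one (a temp dir stands in for GOBBLI_DIR).
        data_dir = Path(tempfile.gettempdir()) / f"gobbli-sketch-{uuid.uuid4().hex}"
        data_dir.mkdir(parents=True)
        return data_dir
    data_dir = Path(data_dir)
    if load_existing:
        # Must point at a directory previously used to create a model.
        if not data_dir.is_dir():
            raise ValueError(f"{data_dir} does not exist; cannot load an existing model")
    else:
        # A fresh model's data_dir must be empty if it already exists.
        if data_dir.is_dir() and any(data_dir.iterdir()):
            raise ValueError(f"{data_dir} already exists and is not empty")
        data_dir.mkdir(parents=True, exist_ok=True)
    return data_dir
```

Passing load_existing=True with kwargs is pointless by design: saved parameters win, so the sketch (like the real constructor) treats the directory as the source of truth.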
    build()
        Perform any pre-setup that needs to be done before running the model (building Docker images, etc.).
    property class_weights_dir
        The root directory used to store initial model weights (before fine-tuning). These should generally be pretrained weights made available by the model developers. This directory will NOT be created by default; models should download their weights here and remove the weights directory if the download doesn't finish properly.

        Most models making use of this directory will have multiple sets of weights and will need to store them in subdirectories under this directory.

        - Return type
            Path
        - Returns
            The path to the class-wide weights directory.
    data_dir()
        - Return type
            Path
        - Returns
            The main data directory unique to this instance of the model.
    embed(embed_input, embed_dir_name=None)
        Generates embeddings using a model and the params in the given gobbli.io.EmbedInput.

        - Parameters
            embed_input (EmbedInput) – Contains various parameters needed to determine how to generate embeddings and what data to generate embeddings for.
            embed_dir_name (Optional[str]) – Optional name to store embedding input and output under. The directory is always created under the model's data_dir. If a name is not given, a unique name is generated via a UUID. If a name is given, that directory must not already exist.
        - Returns
            Output of embedding generation.
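The embed_dir_name rule (UUID fallback, no clobbering of an explicitly named directory) can be sketched with the stdlib. resolve_embed_dir is a hypothetical helper for illustration, not gobbli's code.

```python
import uuid
from pathlib import Path


def resolve_embed_dir(data_dir, embed_dir_name=None):
    """Sketch of the naming rule described above (hypothetical helper)."""
    if embed_dir_name is None:
        # No name given: generate a unique one via a UUID.
        embed_dir_name = uuid.uuid4().hex
    embed_dir = Path(data_dir) / embed_dir_name
    # The directory is always created under the model's data_dir and,
    # when explicitly named, must not already exist.
    if embed_dir.exists():
        raise ValueError(f"{embed_dir} already exists")
    embed_dir.mkdir(parents=True)
    return embed_dir
```

The UUID fallback means repeated embed() calls never collide, while an explicit name lets you locate (and must not reuse) a specific run's input and output.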
    embed_dir()
        The directory to be used for data related to embedding (weights, embeddings, etc.).

        - Return type
            Path
        - Returns
            Path to the embedding data directory.
    property image_tag
        - Return type
            str
        - Returns
            The Docker image tag to be used for the USE container.
    property info_path
        - Return type
            Path
        - Returns
            The path to the model's info file, containing information about the model including the type of model, the gobbli version it was trained using, etc.
    init(params)[source]
        See gobbli.model.base.BaseModel.init().

        USE parameters:
            use_model (str): Name of a USE model to use. See USE_MODEL_ARCHIVES for a listing of available USE models.
    property logger
        - Return type
            Logger
        - Returns
            A logger for derived models to use.
    property metadata_path
        - Return type
            Path
        - Returns
            The path to the model's metadata file containing model-specific parameters.
    classmethod model_class_dir()
        - Return type
            Path
        - Returns
            A directory shared among all classes of the model.
    property weights_dir
        - Return type
            Path
        - Returns
            Directory containing pretrained weights for this instance.
gobbli.model.use.model.USE_MODEL_ARCHIVES = {
    'universal-sentence-encoder': 'https://tfhub.dev/google/universal-sentence-encoder/4?tf-hub-format=compressed',
    'universal-sentence-encoder-large': 'https://tfhub.dev/google/universal-sentence-encoder-large/5?tf-hub-format=compressed',
    'universal-sentence-encoder-multilingual': 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3?tf-hub-format=compressed',
    'universal-sentence-encoder-multilingual-large': 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3?tf-hub-format=compressed',
}
    A mapping from model names to TFHub URLs. "universal-sentence-encoder" is a safe default for most situations. Larger models require more time and GPU memory to run.
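The use_model name passed to init() selects an entry from this mapping. A minimal, self-contained sketch of that lookup: the mapping is reproduced verbatim from the module, but the archive_url helper and its error message are hypothetical, for illustration only.

```python
# The mapping, reproduced from the module so this snippet is self-contained.
USE_MODEL_ARCHIVES = {
    "universal-sentence-encoder": "https://tfhub.dev/google/universal-sentence-encoder/4?tf-hub-format=compressed",
    "universal-sentence-encoder-large": "https://tfhub.dev/google/universal-sentence-encoder-large/5?tf-hub-format=compressed",
    "universal-sentence-encoder-multilingual": "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3?tf-hub-format=compressed",
    "universal-sentence-encoder-multilingual-large": "https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3?tf-hub-format=compressed",
}


def archive_url(use_model: str) -> str:
    """Resolve a use_model name to its TFHub archive URL (hypothetical helper)."""
    try:
        return USE_MODEL_ARCHIVES[use_model]
    except KeyError:
        valid = ", ".join(sorted(USE_MODEL_ARCHIVES))
        raise ValueError(f"Unknown USE model {use_model!r}; expected one of: {valid}") from None


# The safe default named above:
print(archive_url("universal-sentence-encoder"))
```

The ?tf-hub-format=compressed query string asks TFHub for a downloadable tarball rather than the interactive model page, which is why these URLs are suitable for fetching weights inside a container.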