Modelhub APIs

Documentation of the Modelhub REST API and Python API

REST API

The REST API is the main interface to a model packaged with the Modelhub framework. The REST API of a running model can be reached under http://<ip of model>:<port>/api/<call>. For example, http://localhost:80/api/get_config retrieves a JSON string with the model configuration.
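As a minimal sketch of this URL scheme, the calls can be built and queried from Python using only the standard library (host and port are assumptions; they depend on how you started the model):

```python
import json
from urllib.request import urlopen

def api_url(call, host="localhost", port=80):
    """Build the URL for a Modelhub REST API call, e.g. 'get_config'."""
    return f"http://{host}:{port}/api/{call}"

def get_json(call, host="localhost", port=80):
    """Fetch an API endpoint from a running model and decode the JSON response."""
    with urlopen(api_url(call, host, port)) as response:
        return json.loads(response.read())

# e.g., with a model running locally:
# config = get_json("get_config")
```

The same `api_url` helper works for all GET endpoints listed below (get_legal, get_model_io, get_samples, etc.).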

The REST API is automatically instantiated when you start a model via python start.py <your model name>. See the documentation of the ModelHubRESTAPI class below for all available functions.

REST API Class

class modelhubapi.restapi.ModelHubRESTAPI(model, contrib_src_dir)[source]
get_config()[source]

GET method

Returns:Model configuration dictionary.
Return type:application/json

get_legal()[source]

GET method

Returns:All of modelhub’s, the model’s, and the sample data’s legal documents as a dictionary. If one (or more) of the legal files does not exist, the error is logged under the corresponding key. Dictionary keys are:
  • modelhub_license
  • modelhub_acknowledgements
  • model_license
  • sample_data_license
Return type:application/json
get_model_io()[source]

GET method

Returns:The model’s input/output sizes and types as a dictionary. Convenience function; this is a subset of what get_config() returns.
Return type:application/json
get_model_files()[source]

GET method

Returns:The trained deep learning model in its native format and all its associated files in a single zip archive.
Return type:application/zip
get_samples()[source]

GET method

Returns:List of URLs to all sample files associated with the model.
Return type:application/json
predict()[source]

GET/POST method

Returns:Prediction result on input data. Return type/format as specified in the model configuration (see get_model_io()), and wrapped in json. In case of an error, returns a dictionary with error info.
Return type:application/json

GET method

Parameters:fileurl – URL to input data for prediction. Input type must match the specification in the model configuration (see get_model_io()). The URL must not contain any arguments and should end with the file extension.

GET Example: curl -X GET "http://localhost:80/api/predict?fileurl=<URL_OF_FILE>"

POST method

Parameters:file – Input file with data for prediction. Input type must match specification in the model configuration (see get_model_io())

POST Example: curl -i -X POST -F file=@<PATH_TO_FILE> http://localhost:80/api/predict
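For the GET variant, special characters in the input URL (such as : and /) should be percent-encoded so they are not mistaken for arguments. A small sketch, assuming a model on localhost:80:

```python
from urllib.parse import urlencode

def predict_url(fileurl, host="localhost", port=80):
    """Build the GET /api/predict URL; urlencode escapes the file URL's
    special characters so it survives as a single query parameter."""
    return f"http://{host}:{port}/api/predict?" + urlencode({"fileurl": fileurl})
```

The resulting string can be fetched with urllib.request.urlopen or curl as shown above.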

predict_sample()[source]

GET method

Performs prediction on sample data.

Note

Currently you cannot use predict() for inference on sample data hosted under the same IP as the model API. This function is a temporary workaround. To be removed in the future.

Returns:Prediction result on input data. Return type as specified in the model configuration (see get_model_io()), and wrapped in json. In case of an error, returns a dictionary with error info.
Return type:application/json
Parameters:filename – File name of the sample data. No folders or URLs.
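A sketch of building the call URL, assuming the sample file name is passed as a filename query parameter (mirroring the fileurl parameter of predict()):

```python
from urllib.parse import urlencode

def predict_sample_url(filename, host="localhost", port=80):
    # Assumption: the sample file name is sent as a 'filename' query
    # parameter, matching the Parameters entry above.
    return f"http://{host}:{port}/api/predict_sample?" + urlencode({"filename": filename})
```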

Python API

The Python API is a convenience interface to a model when you have direct access to the modelhub runtime environment, i.e. when you are inside the Docker running the model. This is, for example, the case if you work with the sandbox Jupyter notebook provided with the model you are running.

When you are working inside the Docker running a model, you can import the Modelhub Python API via from modelapi import model. This is a convenience import, which implicitly takes care of initializing the ModelHubAPI with the model in the current Docker. You would then call the API (e.g. to get the model config) like this configuration = model.get_config().

Python API Class

class modelhubapi.pythonapi.ModelHubAPI(model, contrib_src_dir)[source]

Generic interface to access a model.

get_config()[source]
Returns:Model configuration.
Return type:dict
get_legal()[source]
Returns:All of modelhub’s, the model’s, and the sample data’s legal documents as a dictionary. If one (or more) of the legal files does not exist, the error is logged under the corresponding key. Dictionary keys are:
  • modelhub_license
  • modelhub_acknowledgements
  • model_license
  • sample_data_license
Return type:dict
get_model_io()[source]
Returns:The model’s input/output sizes and types as a dictionary. Convenience function; this is a subset of what get_config() returns.
Return type:dict
get_samples()[source]
Returns:Folder and file names of sample data bundled with this model. The dictionary key “folder” holds the absolute path to the sample data folder in the model container. The key “files” contains a list of all file names in that folder. Join these together to get the full paths to the sample files.
Return type:dict
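Joining the two keys can be done with os.path.join, for example (a sketch; the folder path shown is illustrative):

```python
import os

def sample_paths(samples):
    """Expand the dict returned by get_samples() into full file paths
    by joining the 'folder' path with each entry in 'files'."""
    return [os.path.join(samples["folder"], name) for name in samples["files"]]
```

Each returned path can then be passed directly to predict().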
predict(input_file_path, numpyToFile=True, url_root='')[source]

Performs the model’s inference on the given input.

Parameters:
  • input_file_path (str or dict) – Path to the input file to run inference on. Either a direct input file or a json file containing paths to all input files the model needs for prediction. The appropriate structure for the json can be found in the documentation. When calling the Python API directly, you can also pass a dict with the same structure instead of a json file.
  • numpyToFile (bool) – Only effective if the prediction is a numpy array. Indicates whether numpy outputs should be saved to a file, in which case the path to that file is returned. If false, a json-serializable list representation of the numpy array is returned instead. The list representation is very slow for large numpy arrays.
  • url_root (str) – URL root added by the REST API.
Returns:Prediction result on input data. Return type/format as specified in the model configuration (see get_model_io()). In case of an error, returns a dictionary with error info.
Return type:dict, list, or numpy array