
Object Detection


Build and Run Deep Learning Pipelines on Katonic Studio#

A pipeline comprises one or more nodes that are (in many cases) connected to define execution dependencies. Each node is implemented by a component and typically performs only a single task, such as loading data, processing data, training a model, or sending an email.

In this use case, we learn how to deploy a pre-trained deep learning model on Katonic Studio. Here, we use the YOLO (You Only Look Once) model to detect various objects. You can clone it from the GitHub repository Yolo Object Detection. Step-by-step instructions are given below.

Step 1: Creating a notebook#

Once you have signed in to the Katonic platform, you will see the “Notebook” section on the left-hand side panel; click on it. Then enter a name for the notebook, choose “Katonic-studio” as the Image, and select the CPU and Memory.

Subsequently, you will see a screen like the image attached below. As we are building the pipeline using the “Katonic Studio” service, click on it and you will be taken to the next screen.

Step 2: Cloning the GitHub repository#

If you have built your own model and it is on GitHub, you can clone it directly into your notebook with the git clone your_project.git command. Otherwise, you can use the Yolo Object Detection repository and clone it into the notebook. After cloning, your notebook will look something like this:

Step 3: Pipeline building and configuration#

One of the advantages of Katonic Studio is that you can drag and drop IPython notebooks and connect them just like a flowchart. Here, we have two notebooks: one for building the YOLO model and one for testing it.

3.1 Drag both of them into the workspace and connect them.

3.2 After connecting, you will see a red dot on top of each notebook. To get rid of it, we have to configure each notebook.

3.3 Right-click the “YOLO_model” notebook node and select Open Properties to review its configuration.

3.4 Select the image compatible with your notebook node; see the image below.

3.5 Next, we have to specify file dependencies, if any are required. Here, the “YOLO_testing” notebook uses six independent files to generate the output, so add those files one by one.

After adding all the files, you can see the full configuration of a node by hovering over it. You will see something like this:
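
For context about what those dependency files feed into: at its core, the “YOLO_testing” notebook loads a pre-trained YOLO network and runs inference on test images. The sketch below is a minimal illustration using OpenCV's DNN module; the file names (yolov3.cfg, yolov3.weights, coco.names, test.jpg) and the confidence threshold are assumptions, and the notebooks in the repository may differ.

```python
import cv2
import numpy as np

# Hypothetical file names -- the cloned repository may use different paths.
CFG, WEIGHTS, NAMES, IMAGE = "yolov3.cfg", "yolov3.weights", "coco.names", "test.jpg"

# Load the pre-trained Darknet model and the class labels.
net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
classes = open(NAMES).read().strip().split("\n")

# Prepare the input blob and run a forward pass through the output layers.
image = cv2.imread(IMAGE)
h, w = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Draw boxes for detections above an assumed confidence threshold of 0.5.
for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            cv2.rectangle(image, (x, y), (x + int(bw), y + int(bh)), (0, 255, 0), 2)
            cv2.putText(image, classes[class_id], (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("output.jpg", image)
```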

Step 4: Define a runtime environment configuration#

A runtime configuration contains connectivity information for a Kubeflow Pipelines instance and S3-compatible cloud storage. In this tutorial you will use the GUI to define the configuration.

From the pipeline editor toolbar (or the JupyterLab sidebar on the left side), choose Runtimes to open the runtime management panel.

You do not have to worry about the runtime, as a pre-defined configuration is provided. However, you can click + and New Kubeflow Pipelines runtime to create a new configuration for your own Kubeflow Pipelines deployment.
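
If you do create your own configuration, the values it stores (the Kubeflow Pipelines API endpoint plus the S3-compatible object storage endpoint, bucket, and credentials) are the same ones you would use to reach the deployment programmatically. A minimal sketch with the kfp Python SDK, assuming a hypothetical endpoint, just to verify connectivity:

```python
import kfp

# Hypothetical endpoint -- replace with your Kubeflow Pipelines API endpoint
# (the same value you enter in the runtime configuration).
client = kfp.Client(host="http://<kubeflow-host>/pipeline")

# List existing experiments to confirm the endpoint is reachable.
for experiment in client.list_experiments().experiments or []:
    print(experiment.name)
```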

Step 5: Run the pipeline#

Before running the pipeline on Kubeflow, make sure it runs in your local environment. Locally, the “YOLO_testing” notebook will produce YOLO-based predictions on the test images. You can verify this by comparing your output with this image:
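
One way to run the notebook locally in a repeatable way, outside of opening it interactively, is to execute it headlessly. The sketch below uses papermill, which is not part of the tutorial and is only an assumption about your local setup; the notebook file name is taken from the repository described above.

```python
import papermill as pm  # assumes papermill is installed locally

# Execute the testing notebook end-to-end and save an executed copy,
# so the YOLO predictions can be inspected in the output notebook.
pm.execute_notebook(
    "YOLO_testing.ipynb",        # notebook cloned in Step 2
    "YOLO_testing_output.ipynb",
)
```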

Now, we are ready to deploy our pipeline on Kubeflow.

  1. Open the run wizard.

  2. The Pipeline Name is pre-populated with the pipeline file name. The specified name is used to name the pipeline and experiment in Kubeflow Pipelines.

  3. Select Kubeflow Pipelines as the Runtime platform.

  4. From the Runtime configuration drop-down, select the runtime configuration you just created.

  5. Start the pipeline run. The pipeline artifacts (notebooks, Python scripts, and file input dependencies) are gathered, packaged, and uploaded to cloud storage. The pipeline is compiled, uploaded to Kubeflow Pipelines, and executed in an experiment.

Katonic Studio automatically creates a Kubeflow Pipelines experiment using the pipeline name. For example, if you named the pipeline hello-generic-world, Katonic Studio creates an experiment named hello-generic-world.

Each time you run a pipeline with the same name, it is uploaded as a new version, allowing for comparison between pipeline runs.

The confirmation message contains two links:

  • Run details: provides access to the Kubeflow Pipelines UI, where you can monitor the pipeline execution progress.
  • Object storage: provides access to the cloud storage where you can access the input and output artifacts. (This link might not work if the configured cloud storage does not have a GUI interface or if the URL differs from the endpoint URL you've configured.)

Step 6: Monitor your pipeline#

Go to the main Katonic dashboard, click on “Pipelines” and then “Runs”. This will show a list of all running and completed runs.

After clicking on the current run, you will see something like this:
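
Besides the dashboard, you can also check the status of a run programmatically with the kfp SDK. This is only a sketch, assuming the same hypothetical endpoint used earlier and an assumed pipeline/experiment name:

```python
import kfp

client = kfp.Client(host="http://<kubeflow-host>/pipeline")  # hypothetical endpoint

# Look up the experiment that was created from the pipeline name,
# then list its runs and print their current status.
experiment = client.get_experiment(experiment_name="yolo-object-detection")  # assumed name
for run in client.list_runs(experiment_id=experiment.id, page_size=10).runs or []:
    print(run.name, run.status)
```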

If you want to generate a YAML file for reproducibility of the pipeline, click the “Export pipeline” button in Katonic Studio and select the YAML format; see below.
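
The exported YAML can typically be re-uploaded to Kubeflow Pipelines later to reproduce the pipeline. A sketch using the kfp SDK, where the endpoint, file name, and pipeline name are assumptions:

```python
import kfp

client = kfp.Client(host="http://<kubeflow-host>/pipeline")  # hypothetical endpoint

# Upload the YAML exported from Katonic Studio as a pipeline in Kubeflow Pipelines.
client.upload_pipeline(
    pipeline_package_path="yolo_object_detection.yaml",  # assumed export file name
    pipeline_name="yolo-object-detection",
)
```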

Summary:

This concludes the “Build and Run Deep Learning Pipelines on Katonic Studio” tutorial. You've learned how to:

  • Create a Kubeflow Pipelines runtime configuration

  • Run a pipeline on Kubeflow Pipelines

  • Monitor the pipeline run progress in the Katonic dashboard

  • Export a pipeline to a Kubeflow Pipelines native format