
Torizon IDE Backend Architecture and Internals

 


Introduction

Providing the capability of running applications inside containers gives Torizon the ability to support many different languages and programming environments.

This, unfortunately, does not mean that everything will work out of the box.

Some effort is required to manage different aspects:

  • Build code for the right target (for systems that generate native code)
  • Host the code inside a container
  • Deploy code and container to the target in an efficient way
  • Support remote debugging

Configuring all those things is time-consuming and error-prone, and may lead to non-optimal scenarios where users have to deal with too many details every time they build and run their applications. It can even lead to very inefficient development setups: long wait times after each minor change to the code, lots of manual operations every time the application has to run on the target, or no way to debug code interactively.

Supporting all available editors, IDEs, and development environments would not be possible, but Toradex provides an open-source backend that can be integrated into any development environment that provides some kind of extensibility and can invoke REST-based APIs.

Toradex currently provides fully implemented extensions for:

  • Visual Studio 2019 (C/C++ using MSBuild)
  • Visual Studio Code (C/C++ with different build systems, Python and C#)

This article describes the features and architecture of the Torizon IDE backend. It can be useful for users of the existing extensions who want to understand the system and leverage its capabilities, and for developers who want to better integrate their favorite editor/IDE/language with Torizon.

The purpose of the IDE backend is to simplify the task of packaging an application as a container and deploying, running, and debugging it on a target device.

The IDE backend is implemented as a Python application called moses, designed to run in the background and receive requests via a REST API.

The API is declared using OpenAPI v2.0 (the same version supported by Docker) and can be found in the source tree under moses/swagger.yaml.

The IDE backend has been tested on Linux and Windows. It runs on the developer's machine and securely connects to the target via SSH.

This article complies with the Typographic Conventions for Torizon Documentation.

Basic Concepts

Before looking into the implementation, it can be useful to understand some basic concepts used by the IDE backend.

Debug/Release Configurations

The backend can deploy applications in debug or release mode, so many of the settings you'll see in this document can be specified for the "debug" configuration, for the "release" configuration, or as "common", using the same value for both configurations.

When looking for a setting, the backend first tries to find a value specific to the current configuration. If it does not find one, it tries the common settings, and if that also does not exist it uses the default value.
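
To make that lookup order concrete, here is a minimal sketch (illustrative only, not the backend's actual code; the example values are invented):

import sys

def resolve_setting(entry: dict, configuration: str, default=None):
    # Return the configuration-specific value if present,
    # otherwise the "common" one, otherwise a default.
    if entry.get(configuration) is not None:
        return entry[configuration]
    if entry.get("common") is not None:
        return entry["common"]
    return default

# Hypothetical "startupscript" entry with common/debug/release variants.
startupscript = {"common": "start.sh", "debug": "start-debug.sh", "release": None}
print(resolve_setting(startupscript, "debug"))    # start-debug.sh
print(resolve_setting(startupscript, "release"))  # falls back to start.sh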

Platform

A platform defines a class of applications (ex: C/C++ console applications, Qt for Python applications, etc.) and a CPU architecture (ex: ARM32, ARM64).

A platform can support one or more runtimes.

The runtime can be defined as a development language (C/C++, Python), a framework (.NET, ASP.NET), or a combination of the two (C/C++ for Qt, Qt for Python, etc.).

The goal of a platform is to provide a basic template that simplifies the steps required to build, deploy, run and debug an application on a device running Torizon.

A platform defines a base container image (optionally different images for debug and release configurations). This image is described by a Dockerfile that can be built as it is, but can also be used as a template to add application-specific features (more about this later).

Optionally, a platform can also provide an SDK. This is, again, a container template, in this case one that configures the right environment for building an application without requiring the setup of complex toolchains on the developer's PC.

Platforms may be compatible only with some modules. For example, a 64-bit platform will support only modules with 64-bit capable CPUs.

In addition to the container templates, the platform can also provide parameters used to run the container, additional scripts, and docker-compose files to run additional services and containers required by a specific kind of application (for example, to start the Wayland compositor when running a graphical application).

The platforms are defined using YAML files. Toradex-provided platforms are in the "platforms" subfolder of the Moses setup and should not be edited by users. Users may add additional platforms under .moses/platforms inside their home folder.

Platforms also have a set of generic properties that may be used by the IDE plugin to configure, for example, compiler parameters or other options.

Tags can be used in YAML files, dockerfiles, docker-compose files, and scripts and can be replaced at build/runtime with values configured for a specific application.

Tags use the following format:

#%<object>.<property>%#

Properties can be exposed by platform and application objects. A detailed list can be found in the Tags Reference section of this document.
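
As a rough illustration of how such tags could be expanded (a simplified sketch, not the code actually used by the backend):

import re

def expand_tags(template: str, objects: dict) -> str:
    # Replace #%<object>.<property>%# tags with values taken from the
    # given objects (e.g. "platform" and "application" property maps).
    def lookup(match):
        obj, prop = match.group(1), match.group(2)
        return str(objects.get(obj, {}).get(prop, ""))
    return re.sub(r"#%(\w+)\.(\w+)%#", lookup, template)

# Hypothetical values for a fragment of a Dockerfile template.
objects = {"application": {"extrapackages": "libgpiod2 python3-serial"}}
print(expand_tags("RUN apt-get install #%application.extrapackages%#", objects))
# prints: RUN apt-get install libgpiod2 python3-serial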

See below an example of how a platform YAML file looks:

---
# set to true for platforms that are provided by Toradex
standard: true
# descriptive information
name: python3 arm64v8
version: "1.0"
description: minimal python3 setup on debian
# supported modules (using product id),* means all modules except unsupported ones
supportedmodels: ["*"]
unsupportedmodels: []
# defines base image and dockerfile templates for the containers
baseimage:
  common:
    - torizon/arm64v8-debian-base
    - buster
  debug:
  release:
# names of dockerfile templates
container:
  common:
  debug: debug.dockerfile
  release: release.dockerfile
# defines what languages/runtimes are supported by this platform
runtimes:
  - python3
# information about how containers should run
ports:
  common: { "6502/tcp": null }
  debug: {}
  release: {}
volumes:
  common: {}
  debug: {}
  release: {}
devices:
  common: []
  debug: []
  release: []
privileged: false
extraparms:
  common: {}
  debug: {}
  release: {}
# information about the SDK (if required); set usesdk to true if the
# platform provides an SDK container template
usesdk: false
# additional platform-specific properties that can be used in projects or plugins
props:
  common: {}
  debug: {}
  release: {}
 
# additional scripts (docker-compose or shell) that can be used to start/stop
# the container
# If you use docker-compose file the application container should NOT be started by compose
dockercomposefile:
  common: null
  debug: null
  release: null
startupscript:
  common: null
  debug: null
  release: null
shutdownscript:
  common: null
  debug: null
  release: null
networks:
  common: []
  debug: []
  release: []

The YAML file also contains references to the base images for the container and SDK container templates.

This seems to be a duplication, but it is used by the tool to pull all containers required by the different platforms without having to parse the individual dockerfiles.

Platforms, like the Application Configurations described below, have a set of well-defined standard properties and an additional set of custom properties defined as key/value pairs in the "props" field.

Those properties can be defined for a specific platform and can be used to specify platform-specific information.

For example, for C/C++ applications the system defines a "prefix" used to invoke toolchain components (ex: arm-linux-gnueabihf-).

Container templates

Each platform can provide container templates for the target (in release and debug configurations) and, optionally, for the SDK.

Container templates are dockerfiles with some tags that will be replaced with platform or application configuration (discussed later) properties at build time.

Those tags are delimited by #% and %#. For example:

#%application.buildcommands%#

Below is a container template for a generic Debian C/C++ application. As you can see, the template does not add any package on top of the base container, but tags can be redefined at the application level to add packages and provide some configuration.

FROM torizon/arm32v7-debian-base:buster
 
#%application.expose%#
#%application.arg%#
 
# Make sure we don't get notifications we can't answer during building.
ENV DEBIAN_FRONTEND="noninteractive"
 
#%application.env%#
 
# commands that should be run before installing packages (ex: to add a feed or keys)
#%application.preinstallcommands%#
 
# your regular RUN statements here
# Install required packages
RUN if [ ! -z "#%application.extrapackages%#" ]; then \
    apt-get -q -y update \
    && apt-get -q -y install #%application.extrapackages%# \
    && rm -rf /var/lib/apt/lists/* ; \
    fi
 
# commands that should be run after all packages have been installed (RUN/COPY/ADD)
#%application.buildfiles%#
#%application.buildcommands%#
 
#%application.targetfiles%#
 
USER #%application.username%#
 
# commands that will run on the target (ENTRYPOINT or CMD)
#%application.targetcommands%#

A debug container usually also includes components required by the IDE to be able to debug an application running inside that container.

For example, for Visual Studio C/C++ it will need an ssh server and gdb.

SDK

Some kinds of applications have to be compiled to native code before they can be deployed on a target.

For some languages (C#, Go, etc.), installing the compiler and keeping multiple versions of the development environment may not be too complicated, and it may even be managed directly by the IDE.

For other languages (C and C++ for example), configuring and installing the toolchain is not simple on a Linux machine and almost impossible on a Windows one.

Containers can be used to host a development environment tailored to a specific application.

This allows the installation of the same set of libraries required by the components installed in the runtime container, making builds simpler even on different machines.

In some cases (like on Visual Studio 2019) SDKs are accessed via SSH. In any case, compilers and tools will run inside the container, in a sandboxed environment.

The SDK can also be configured using a template file, and it is possible to have different templates for debug and release builds.

Below is an example of a generic SDK container for debian-based applications. The base image already includes multi-arch support and the cross-compiler, so the SDK just installs the additional “-dev” packages required by a specific application.

FROM torizon/torizon-sdk-arm32v7:buster
 
# commands that should be run before installing packages (ex: to add a feed or keys)
#%application.sdkpreinstallcommands%#
 
# your regular RUN statements here
# Install required packages
RUN if [ ! -z "#%application.devpackages%#" ]; then apt-get -q -y update \
    && apt-get -q -y install #%application.devpackages%#\
    && rm -rf /var/lib/apt/lists/*; \
    fi
 
#%application.sdkpostinstallcommands%#

Define a custom platform

If you have multiple applications using the same libraries or sharing the same configuration, you may want to define your own custom platform.

To do this you have to provide a config.yaml file, a dockerfile template (or one for release and one for debug), and, optionally, an SDK template - that may also have debug and release variants.

You need to put those files inside a subfolder of .moses/platforms in your home folder. You have to use a unique name for the folder; this will be the platform id.

The IDE backend does not dynamically reload platforms, so you have to restart it to ensure that your new platforms are ready to be used.

The easiest way to create a new custom platform is to take one of the existing ones (stored under the platforms folder of the backend installation or source repo), clone it, and change it to match your requirements.

Application Configuration

An application configuration takes the generic definition provided by the platform and configures it for a specific scenario.

Usually, the Platform defines the general support required for a specific runtime/language, and the Application Configuration provides the additional details required to run a specific user application.

This "layered" approach avoids duplication of the basic settings, allowing plenty of customization chances to run your code in the exact way you want it to run.

The application configuration object always references a platform.

Application configuration is meant to be kept together with the application code, in a subfolder.

You may have multiple application configurations in the same codebase, each in its own separate subfolder.

The IDE backend service does not load all application configurations at startup: it would be time-consuming, and the service has no way to know where those configuration files are on your filesystem. Before operating on an application object, it must therefore be loaded by pointing the service to the right folder. This is usually done automatically by the IDE extensions.

The service can create an application, populating its folder and assigning it a unique id. This id will then be used when naming images, containers, etc., to avoid clashes with non-unique names from other applications.

Application configuration contains the same information related to container startup as the platform does (parameters can override or be in addition to those in the platform configuration, more on this in the following chapters).

Most of those parameters will be changed by the end user via the IDE plugin's user interface; others, like image IDs, are used internally by the system.

The application configuration also contains keys used to automate ssh connections to the target container.

The application object is not only a data container; it also provides actions to:

  • Build the container for a specific configuration (debug/release) - This involves generating a real dockerfile by replacing tags inside the platform's template and then running docker build
  • Deploy the container to a specific device (more on devices later) - The container is deployed over SSH via docker save/docker load; this avoids uploading/downloading the image to a docker registry. The image will be deployed only if it's not already on the target
  • Run/stop the container on a target - Creates an instance of the image
  • Build the SDK container - A dockerfile is generated from the template and then built. If the SDK container was already running it will be restarted
  • Run/stop the SDK container on the developer's PC - Starts and stops the SDK container, enabling SSH connections if needed

Config.yaml

The application configuration file is stored in the application configuration’s root folder.

It also contains RSA keys, which make it not very readable, but users are expected to change the information inside it only through the IDE plugins.

Changing those files when the backend is running may lead to unpredictable results, since it's not guaranteed that manual changes will be preserved when the system needs to store additional information in the file.

If an SCM system is used, it's a good idea to store these files together with the code; this will allow all users to build the images using the same IDs.

When this is not desirable (ex: when publishing an application as open-source on GitHub), the system provides features to remove all the IDs and re-generate them on the next re-opening.

See below an example of the application config.yaml file.

# unique id of the application, assigned on creation time
id: 4b83c734-6675-42ea-aeff-441e21c64f1f
 
# platform used as base for this application
platformid: arm64v8-debian-base_buster
 
# user account used to run the application inside the container
username: torizon
 
# this is updated when service changes the application via REST
# calls, this will allow build systems to decide when a rebuild
# of the images is required
# Date is changed only when properties that may impact image
# build are modified
modificationdate: '2019-12-20T06:36:37.962539'
 
# information used to start the application container
# this information will be merged with the one provided
# by the platform (each field has debug, release and common parts)
devices:
    common: []
    debug: []
    release: []
volumes:
    common: {}
    debug: {}
    release: {}
extraparms:
    common: {}
    debug: {}
    release: {}
networks:
    common: []
    debug: []
    release: []
ports:
    common: {}
    debug: {}
    release: {}
 
# scripts and docker-compose file used to start additional
# containers/services; application-provided ones will override
# the platform ones
shutdownscript:
    common: null
    debug: null
    release: null
startupscript:
    common: null
    debug: null
    release: null
dockercomposefile:
    common: null
    debug: null
    release: null
 
# custom application properties that can be used as tags in dockerfile
# templates
props:
    common:
        arg: ''
        buildcommands: ''
        buildfiles: ''
        command: ''
        devpackages: ''
        env: ''
        expose: ''
        extrapackages: ''
        preinstallcommands: ''
        sdkpostinstallcommands: ''
        sdkpreinstallcommands: ''
        targetfiles: ''
    debug:
        arg: 'ARG SSHUSERNAME=#%application.username%#'
    release: {}
 
# ID of the last built image
images:
    debug: sha256:53df69db9df438b205a07c548d104872df861edb10cafbf7722215b46156f216
    release: null
 
# ssh information    
privatekey: '-----BEGIN RSA PRIVATE KEY-----
 
    ...
    -----END RSA PRIVATE KEY-----
 
    '
publickey: ssh-rsa ...
 
# address of the SDK container (localhost:<port>)
sdksshaddress: null

Work folder

During image builds or other operations, the backend needs to generate or acquire additional files, for example, the Dockerfile used to build an image, or files that need to be included into it or deployed to the target.

This kind of content can be re-generated at any time and is stored in a subfolder of the folder that hosts config.yaml, named "work". This folder can be safely ignored by backup or SCM systems.

Device

The device object can be used to control and monitor processes, images, and containers running on a Torizon device.

It's also used during deployment and debugging to deploy an application (defined via Application Configuration and Platform) to an actual running device.

Each device is identified using its unique Toradex serial number (8 digits).

Devices can be detected using a serial or network connection.

On detection, the device will be configured by enabling the Docker TCP/IP interface (only on 127.0.0.1) and adding the keys for automated SSH login to the selected user account (by default: torizon).

After configuration, the device will be rebooted. No software will be installed on the target device.

Any further connection will be performed over SSH, so a local network connection is required to use the device with the IDE backend.

Devices can be used to implement monitoring functionality from an IDE, showing the status of the device in terms of resources, processes, containers.

Detected devices are saved in subfolders of .moses/devices in the user's home folder.

Each folder will be named using the device's unique ID (serial number). This will allow the system to provide some information about them even when they are offline.

Config.yaml

The device configuration file is created automatically on detection, and users should not edit it directly.

An exception could be the hostname, which you may replace with an IP address if your network does not resolve names correctly.

See below an example of a device config.yaml file.

# this may need to be edited if your system can't resolve hostnames
hostname: apalis-imx6-05040105
 
# descriptive name, used for UI, can be changed to a more descriptive one
name: Toradex Apalis iMX6Q/D Module on Apalis Evaluation Board(05040105)
 
# Toradex model id 
model: 0028
 
# HW/SW version information
hwrev: V1.1C
kernelrelease: 5.4.2-0.0.0-devel+git.0a15b6b8f633
kernelversion: '#1 SMP Fri Dec 6 13:39:24 UTC 2019'
torizonversion: '19700101000000'
 
# user account information
username: torizon
homefolder: /home/torizon
 
# security keys
privatekey: '-----BEGIN RSA PRIVATE KEY-----
...
    -----END RSA PRIVATE KEY-----
 
    '
publickey: ...

The device provides actions to:

  • Read information about memory and storage
  • Read information about images and running containers
  • Delete images
  • Control containers (stop, start, delete)
  • Read the list of running processes on the host or on a specific container

Architecture

The IDE backend application should run on the developer’s PC and provides an HTTP/REST interface, by default on port 5000.

The API is defined using OpenAPI, and the information is exchanged in JSON format over HTTP.

The server accepts only local connections.

It will talk to different entities:

  • IDE plugin(s) to expose its features to the development environment
  • Local instance of docker for building containers and running SDK instances
  • Remote device(s) main OS to monitor resources and processes
  • Remote device(s) main OS to transfer container images and applications
  • Remote instance(s) of docker to create and monitor container instances

The server communicates with a local docker instance using standard docker APIs. This will use a socket on Linux and a TCP local connection on Windows.
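
For reference, these are the kinds of calls involved when talking to the local instance through the docker Python SDK (a minimal sketch, not the backend's actual code; it assumes the docker Python package is installed):

import docker

# On Linux this connects to the local UNIX socket, on Windows to the
# local TCP/named-pipe endpoint, as described above.
client = docker.from_env()

# Typical local operations performed by the backend: building images from
# the generated dockerfiles and listing/running SDK containers.
for image in client.images.list():
    print(image.id, image.tags)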

The IDE can communicate with SDK containers running on the PC or with remote containers on the device; the extension will set up the configuration to make this process transparent to the end user.

Ports and protocols used will depend on the specific runtime/debugger used.

Platforms should provide a debug configuration that handles this in a way that is transparent to the end user.

Communication with SDK containers can happen via SSH or by executing processes inside the container.
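
As an illustration of the second option, a build command could be executed inside a running SDK container using the docker Python SDK (an illustrative sketch; the container name is hypothetical, and the compiler prefix is the one mentioned earlier for C/C++ platforms):

import docker

client = docker.from_env()

# Hypothetical SDK container name; the backend derives the real names
# from the application id and platform.
sdk = client.containers.get("my-app-sdk")

# Run the cross-compiler inside the SDK container instead of installing
# a toolchain on the developer's machine.
exit_code, output = sdk.exec_run("arm-linux-gnueabihf-gcc --version")
print(output.decode())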

Application deployment is controlled by the IDE plugin and can be done to a folder on the host that is then synchronized with the device (via rsync) and mounted inside the container, or directly inside the container.

Deploying to a shared folder during debugging could be more efficient, mostly for applications that need external resources that are not changed after each build.

Application build

During the build step, the dockerfile template provided by the platform is converted into a complete dockerfile by replacing the tags with current values of application/platform properties.

Then the Docker instance on the developer's PC is used to build the container image.

On Windows, the system automatically uses emulation to build an ARM container on an x86/x64 machine.

On Linux, the emulation should be enabled during setup. Please check the article Configure Build Environment for Torizon Containers.

Optionally the application code can be built using an SDK container providing the right toolchain for the target, including the right set of libraries matching the components in the target container.

This is what currently happens for C/C++ applications built using Visual Studio 2019 or Visual Studio Code.

Application deployment

First, the system checks if the application's container is already running and if the running instance is using the latest image (each image has a unique SHA256 identifier).

If this is not the case, or if the container is not running at all, the system deploys the new image over SSH using the docker save and docker load features. This avoids the need to upload and download the image through a docker registry.
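
The same transfer can be reproduced outside the backend; the sketch below shows the idea using Python's subprocess module (the image name and device address are placeholders):

import subprocess

image = "myapp_4b83c734:latest"          # placeholder image name
device = "torizon@apalis-imx6-05040105"  # placeholder device account/host

# Stream the image from the local docker instance to the one on the
# device without going through a registry: docker save | ssh docker load
save = subprocess.Popen(["docker", "save", image], stdout=subprocess.PIPE)
subprocess.run(["ssh", device, "docker", "load"], stdin=save.stdout, check=True)
save.stdout.close()
save.wait()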

Then the application can be deployed using rsync, which is what happens with the Visual Studio Code extension.

The container is started, and the application itself can optionally be deployed using rsync directly inside its filesystem, which is what happens with the Visual Studio extension.

Application debug

For debug deployments, the system will not start the application itself, but it will start what is needed to allow the IDE’s debugger to connect and debug it.

This will depend on the development environment and runtime, and the platform’s base template should take care of adding and running the right components.

The debug platform for C/C++ applications in Visual Studio, for example, configures an SSH server and adds gdb to the image.

In Visual Studio Code:

  • C/C++ debugging uses gdbserver running on the target and gdb running in the SDK container
  • Python debugging uses ptvsd (the Python debugger for Visual Studio)
  • .NET debugging uses an ssh connection to start vsdbg

API

The OpenAPI definition is self-documenting, and all API functions can be viewed and tested using the moses service itself through its Swagger API documentation panel.

Start the service and point your browser to http://localhost:5000/api/ui/ to see the APIs grouped by tag (corresponding to the entities described in the previous chapters), call them, and see the returned values.
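
You can also call the API directly from a script. The sketch below assumes the backend is running on the default port; the /api base path is inferred from the Swagger UI URL above, and the endpoint name is illustrative only, so take the real paths from moses/swagger.yaml or from the Swagger UI:

import requests

BASE = "http://localhost:5000/api"

# Illustrative endpoint name: the actual paths, grouped by tag
# (devices, applications, platforms...), are defined in swagger.yaml.
response = requests.get(BASE + "/devices")
response.raise_for_status()
print(response.json())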

Clients for the APIs can be automatically generated using different tools; the current Python and C# clients are generated using the OpenAPI Generator CLI running in a container.

Each client also has some documentation that can be used to better understand how to use the generated entities:

  • For Python the documentation is under: clients/python/docs
  • For C# it is under: clients/csharp/TorizonAppDeploymentAPI/generated/docs

Tags Reference

Those tags can usually be modified using the IDE plugins' user interface, but in some specific scenarios it may be necessary to edit them manually inside the YAML configuration files. Each tag is listed below with its type and a description.

  • platform.id (string) - Unique id of the platform (folder name)
  • platform.name (string) - Mnemonic name of the platform
  • platform.version (string) - Version of the platform
  • platform.folder (path) - Absolute path of the folder where the platform configuration is stored (can be used to add files to a container)
  • platform.baseimage (string) - Base image of the container template (used in the FROM clause of the Dockerfile)
  • platform.sdkbaseimage (string) - Base image of the SDK template (can be empty if the platform does not support an SDK)
  • platform.runtimes (string[]) - Runtimes supported by the image. Currently supported runtimes are: ccpp, ccpp-no-ssh, python3, dotnet, aspnet
  • platform/application.ports (key/value pairs) - Ports exposed by the container (those configured by the application configuration will be merged with those provided by the platform, replacing those with the same keys and adding the others)
  • platform/application.volumes (key/value pairs) - Volumes mounted in the container (where "key" is the local path or volume name and "value" is the path inside the container plus, optionally, ",ro" to mount it read-only)
  • platform/application.devices (string[]) - List of paths of devices that should be mapped inside the container (ex: /dev/gpiochip0)
  • platform/application.networks (string[]) - List of networks that should be connected to the container. For a network created by a docker-compose script associated with the application configuration you have to prepend "#%application.id%#_" to the actual name
  • platform/application.extraparms (key/value pairs) - Additional custom settings. Check the docker Python API documentation of the container.run method for the list of supported parameters. "Key" should be the parameter name, "value" must be the YAML representation of the value. For example, to set host network mode, add "network_mode" as key and "host" as value
  • platform/application.startupscript (relative path) - Script that will be launched before starting the container; tags can be used inside the script. The script must be in the same folder as the platform/application config file or in a subfolder, and the path must be relative. If the script is specified for both platform and application, only the application one is executed (but it can invoke the platform one, which will be parsed and copied to the target anyway)
  • platform/application.shutdownscript (relative path) - Script that will be launched after the container has been stopped; tags can be used inside the script. The script must be in the same folder as the platform/application config file or in a subfolder. If the script is specified for both platform and application, only the application one is executed (but it can invoke the platform one, which will be parsed and copied to the target anyway)
  • platform/application.dockercomposefile (relative path) - docker-compose script that will be used to start other containers required to run the application; tags can be used inside the script. The script must be in the same folder as the platform/application config file or in a subfolder, and the path must be relative. If the compose file is specified for both platform and application, only the application one is used
  • application.id (string) - Application unique id (also used as a prefix for docker-compose created resources like volumes or networks)
  • application.expose (docker command) - Ports exposed by the application in the format "EXPOSE NN NN", where NN are port numbers (ex: "EXPOSE 80 8080")
  • application.arg (docker command) - Docker build arguments in the format ARG NAME=VALUE. You can also specify multiple values. This is useful only if you plan to use the generated dockerfile in a standalone build
  • application.env (docker command) - Environment variables in the format ENV NAME=VALUE. Multiple entries can be specified and VALUE can contain other tags (ex: ENV FOLDER="/home/dummy" FILE="filename")
  • application.preinstallcommands (docker command) - Commands that will be executed during container build before any package installation. The format must be the one used in Dockerfiles. This can be used to add Debian package feeds to the apt list, add security keys, etc.
  • application.extrapackages (string) - Additional packages that should be installed inside the container. You can specify multiple packages separated by spaces
  • application.devpackages (string) - Development packages that will be installed in the SDK container. If a package has architecture-specific versions you'll have to specify the correct architecture (ex: libopencv:armhf or libopencv:aarch64)
  • application.sdkpackages (string) - Additional packages that will be installed in the SDK container. This can be used to install additional tools or compilers
  • application.buildfiles (docker command) - Can be used to add additional files to the image using the ADD or COPY commands. Files must be placed inside the application configuration folder
  • application.buildcommands (docker command) - Commands that will be executed after all packages have been installed and the debugger and services have been configured. This gives you a chance to change the configuration before the actual command is executed
  • application.targetfiles (docker command) - Commands that will be executed at the end of the build; can be used to add files to the container (ex: providing pre-configuration for services or overriding the default configuration files)
  • application.targetcommands (docker command) - Command executed when the container runs; this may be used to override execution of the application in release containers
  • application.appname (string) - Mnemonic name of the application; for applications created using Visual Studio Code it will match the folder name
  • application.exename (string) - Relative path (from the application install folder) of the executable started when the container starts. Used only by Visual Studio Code
  • application.appargs (string) - Optional arguments that should be passed to the application
  • application.username (string) - Username used to run the container CMD. Other commands will be executed as root
  • application.sdkpreinstallcommands (docker command) - Commands executed before installing packages into the SDK container; can be used to add Debian feeds or keys
  • application.sdkpostinstallcommands (docker command) - Commands executed after devpackages and sdkpackages have been installed
  • application.main (string) - Used only for Python applications. Provides the name of the Python file that contains the main entry point

Both applications and platforms provide a generic entry named “props” where you can specify your own properties that will be replaced as tags using the same logic applied for standard tags.
In the extension UI, those will be referenced as "custom properties".
You can define your custom tags and use them in your dockerfile templates or inside other tags.