
IoT and Edge Computing with Podman



The limited resources in the Internet of Things (IoT) and edge computing place different demands on containerized applications than in classic server and cloud environments. The development team behind the container tool Podman, which has been available as an alternative to Docker since 2018, recognized these trends early and implemented a number of functions that enable the use of containers in edge computing.



  • Podman enables cloud-native application development with Kubernetes for IoT and edge computing, even on low-resource systems like the Raspberry Pi.
  • Unlike with Docker, developers can integrate workloads into systemd using tools such as podman generate systemd and Quadlet, making them more reliable and fault-tolerant (see the sketch after this list).
  • Automatic updates and rollback: Since version 2.0, Podman can automatically update containers, provided they run in a systemd unit and are configured accordingly.
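The following sketch illustrates this systemd integration for a Redis container similar to the backend used later in this article. The file name guestbook-backend.container and the paths are illustrative assumptions, not taken from the article; Quadlet, which ships with Podman since version 4.4, reads such .container files and generates systemd services from them, and the AutoUpdate key opts the container into podman auto-update.

# ~/.config/containers/systemd/guestbook-backend.container (rootless example)
[Unit]
Description=Guestbook Redis backend

[Container]
Image=docker.io/redis:6
PublishPort=6379:6379
# Let "podman auto-update" pull newer images and restart the service:
AutoUpdate=registry

[Install]
WantedBy=default.target

$ systemctl --user daemon-reload    # Quadlet generates guestbook-backend.service
$ systemctl --user start guestbook-backend.service
$ podman auto-update                # updates the container; rolls back if the new image fails to start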

Podman places special emphasis on security and seamless integration into modern Linux systems. Above all, Podman’s more traditional, daemonless fork/exec architecture helps when deploying containers in new terrain. In addition, the tool can run Kubernetes workloads, so developers can program and run cloud-native applications on their local workstations without access to a Kubernetes cluster. Since Podman also works resource-efficiently – Kubernetes workloads can run even on Raspberry Pi systems – it creates a bridge between the traditional craft of system administration and the modern cloud-native world.




Valentin Rothberg is a Senior Principal Software Engineer on Red Hat’s Container Runtimes team, working on core container technologies and tools such as Podman, Buildah, and Skopeo.




Paul Holzinger is a Software Engineer on the Container Runtimes team at Red Hat and, in addition to Podman, specializes in container networking.


Although modern Kubernetes distributions, above all Red Hat OpenShift, make setting up and running a Kubernetes cluster comparatively easy, developing cloud-native applications is often fraught with obstacles. A Kubernetes cluster typically consists of at least three machines with powerful hardware. To enable developers to work locally on a desktop, some projects, such as Minikube, offer slimmed-down versions of Kubernetes that run on traditional workstations.





Podman goes a step further and enables Kubernetes workloads to run on the low-resource microcomputers typical of IoT and edge computing. This means that cloud-native applications can not only be developed but also executed on a Raspberry Pi and comparable systems. However, Podman does not fully replace Kubernetes: functions that rely on the computing power of multiple machines, such as replicas, remain reserved for container platforms. All Kubernetes features supported by Podman are listed in a support matrix in the Podman documentation.

A guestbook (see Figure 1) will serve as an example of a containerized application for Kubernetes. Running the guestbook in Listing 1 requires two containers: a Redis container for the database and a second container for the web frontend running on port 8080.



The guestbook can be accessed at localhost:8080 (Figure 1).

apiVersion: v1
kind: Pod
metadata:
  name: guestbook
spec:
  containers:
  - name: backend
    image: "docker.io/redis:6"
    ports:
    - containerPort: 6379
  - name: frontend
    image: gcr.io/google_samples/gb-frontend:v6
    securityContext:
      privileged: true
    ports:
    - containerPort: 80
      hostPort: 8080
    env:
    - name: GET_HOSTS_FROM
      value: "env"
    - name: REDIS_SLAVE_SERVICE_HOST
      value: "guestbook-backend"

Listing 1: Guestbook

The YAML file can be executed with podman kube play. Two containers then run in a pod – plus an infrastructure container (see Listing 2).

$ podman kube play guestbook.yaml
Pod:
4d9511ab6f087469cd841885cdba5fd3f36256774a3717f603e26e824acd12e2
Containers:
27649cdbb7627e694359fcd2dc2b6f4659063803610ffa5331373aa29d9b6420
14590a970defcfe70a19eb89660855e5df0a9c35f4c6fb39e227b8d0cb36822a

$ podman pod ps
POD ID      NAME     STATUS  CREATED      INFRA ID     # OF CONTAINERS
4d9511ab6f08  guestbook  Running 25 seconds ago  882814e1cd21  3

$ podman container ps
CONTAINER ID  IMAGE                                COMMAND           CREATED     STATUS     PORTS                                     NAMES
882814e1cd21  localhost/podman-pause:4.5.1-1685123928                    39 seconds ago  Up 37 seconds  0.0.0.0:6379->6379/tcp, 0.0.0.0:8080->80/tcp  4d9511ab6f08-infra
27649cdbb762  docker.io/library/redis:6            redis-server      38 seconds ago  Up 37 seconds  0.0.0.0:6379->6379/tcp, 0.0.0.0:8080->80/tcp  guestbook-backend
14590a970def  gcr.io/google_samples/gb-frontend:v6 apache2-foregroun...  37 seconds ago  Up 37 seconds  0.0.0.0:6379->6379/tcp, 0.0.0.0:8080->80/tcp  guestbook-frontend

Listing 2: Container for the guest book

If needed, the containers and pods can be stopped and removed with the command podman kube down.
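For the guestbook example, a minimal teardown sketch looks like this (the pod and container IDs from Listing 2 will differ on other systems):

$ podman kube down guestbook.yaml
$ podman pod ps    # the guestbook pod no longer appears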

The Podman development team consistently builds on Kubernetes YAML and thereby pursues two goals: on the one hand, it should become easier to develop cloud-native applications, because Podman creates a bridge between developers’ local workstations and production operation on a Kubernetes cluster. On the other hand, Podman prefers to rely on open standards – and Kubernetes YAML has already established itself as the de facto standard for describing containerized workloads. The Podman team’s vision is to use Kubernetes YAML not only to run cloud-native applications on clusters, but to make it the standard in many other environments: from Raspberry Pi microcomputers to local workstations to embedded systems in industry and satellites.
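The bridge also works in the other direction: Podman can emit Kubernetes YAML for locally created pods. A short sketch, assuming the guestbook pod from Listing 2 is still running (the subcommand is podman kube generate, formerly podman generate kube; the output file name is arbitrary):

$ podman kube generate guestbook > guestbook-generated.yaml    # writes the pod definition as Kubernetes YAML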

Podman is primarily focused on Linux, but its functions can also be used on Windows and macOS via Podman Desktop. Version 1.0 of that tool, which focuses primarily on deeper integration with Kubernetes, has been available since May 2023.
