This page provides an overview of init containers: specialized containers that run before app containers in a Pod. A Pod is the smallest and simplest Kubernetes object; it represents a set of running containers on your cluster.
Init containers can contain utilities or setup scripts not present in an app image. You can specify init containers in the Pod specification alongside the containers array, which describes the app containers.
To specify an init container for a Pod, add the initContainers field to the Pod specification, as an array of objects of type Container, alongside the app containers array. The status of the init containers is returned in the .status.initContainerStatuses field as an array of container statuses (analogous to the .status.containerStatuses field).
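The field placement can be sketched as follows (a minimal sketch; names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  initContainers:        # run first, one at a time, each to completion
  - name: init-step
    image: busybox:1.28
    command: ['sh', '-c', 'echo preparing the Pod']
  containers:            # app containers start only after all init containers succeed
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
```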
Init containers support all the fields and features of app containers, including resource limits, volumes, and security settings. However, the resource requests and limits for an init container are handled differently, as documented in Resources. Also, init containers do not support readiness probes, because they must run to completion before the Pod can be ready. If you specify multiple init containers for a Pod, the kubelet runs them sequentially.
Each init container must succeed before the next can run. When all of the init containers have run to completion, the kubelet initializes the application containers for the Pod and runs them as usual. Because init containers have separate images from app containers, they have some advantages for start-up-related code:
For example, an init container can wait for a Service (a way to expose an application running on a set of Pods as a network service), clone a Git repository into a Volume (a directory containing data, accessible to the containers in a Pod), or place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app container. This example defines a simple Pod that has two init containers. The first waits for myservice and the second waits for mydb.
Once both init containers complete, the Pod runs the app container from its spec section. Until Services named mydb and myservice exist, those init containers remain waiting to discover them.
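The example looks roughly like this (a reconstructed sketch; the busybox tag and the lookup loops are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done']
```

Each init container simply polls DNS until its Service resolves, so creating the two Services is what unblocks the Pod.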
This simple example should provide some inspiration for you to create your own init containers. During Pod startup, the kubelet delays running init containers until the networking and storage are ready. Each init container must exit successfully before the next one starts. If a container fails to start due to the runtime, or exits with failure, it is retried according to the Pod's restartPolicy.
A Pod cannot be Ready until all init containers have succeeded. The ports on an init container are not aggregated under a Service. A Pod that is initializing is in the Pending state, but should have the condition Initialized set to false. If the Pod restarts, or is restarted, all init containers must execute again.
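These states can be observed with kubectl (a sketch; the Pod name is illustrative):

```shell
kubectl get pod myapp-pod        # STATUS shows e.g. Init:0/2 while initializing
kubectl describe pod myapp-pod   # per-init-container state and events
kubectl get pod myapp-pod -o jsonpath='{.status.conditions[?(@.type=="Initialized")].status}'
```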
Changes to the init container spec are limited to the container image field. Altering an init container image field is equivalent to restarting the Pod.
Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on emptyDir volumes should be prepared for the possibility that an output file already exists.
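An idempotent init step might be sketched like this (names and the file contents are illustrative); the plain redirect truncates and rewrites the file, so a re-run after a Pod restart is harmless:

```yaml
initContainers:
- name: render-config
  image: busybox:1.28
  # '>' overwrites any app.conf left behind by an earlier attempt instead of failing
  command: ['sh', '-c', 'echo "listen=8080" > /work/app.conf']
  volumeMounts:
  - name: workdir
    mountPath: /work
volumes:
- name: workdir
  emptyDir: {}
```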
Init containers have all of the fields of an app container. However, Kubernetes prohibits readinessProbe from being used, because init containers cannot define readiness distinct from completion; this is enforced during validation. Use activeDeadlineSeconds on the Pod and livenessProbe on the container to prevent init containers from failing forever.
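A deadline on initialization might be sketched like this (names, port, and the 120-second budget are illustrative). Note that activeDeadlineSeconds bounds the Pod's entire runtime, not just its init phase, so it only suits Pods that are expected to finish:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bounded-init
spec:
  activeDeadlineSeconds: 120   # terminate the Pod if it is still running after 2 minutes
  initContainers:
  - name: wait-for-db
    image: busybox:1.28
    command: ['sh', '-c', 'until nc -z mydb 5432; do sleep 2; done']
  containers:
  - name: job
    image: busybox:1.28
    command: ['sh', '-c', 'echo running the job']
```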
I could not find anything useful in the system logs. Do you have any ideas why it is failing, or how I can debug it further? I found that my problem was a volume mount. I just had this error this morning: in my case, I had a volume mount that referenced a file directly, and I accidentally moved the file to a different directory. So yeah, don't do that. (This repository has been archived by the owner and is now read-only.)
Host: CentOS 7. Hey man, have you made any progress on this issue? Sorry, I have not touched it since.
I will have another try this evening and report back. Not sure if there was a better way to fix the issue, but I ended up recreating the container.
Docker container closes prematurely
I've been seeing intermittent build failures with some of our Jenkins Pipeline builds when running build steps within Docker containers. The environments in question build a Docker image from a Dockerfile on the fly, then run the build steps within an instance of the image using the docker. From what I can tell, the operations run within the container occasionally cease execution before completion.
If I sound unsure of the exact cause, it's because the output produced in the build logs is largely useless. Below is an example of one of my trial cases: while the sleep should have lasted 20 minutes, the container exited about 2 minutes later, and the log indicates that the "script" exited with a return code of "-1". I can assure you that the sleep operation did not error out with a -1, but even if it had, the error value reported by Jenkins would have been 255, since the negative integer appears to be converted to an unsigned 8-bit value.
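The conversion the reporter describes is easy to confirm: POSIX exit statuses are stored as unsigned 8-bit values, so a -1 wraps around to 255.

```python
# -1 interpreted as an unsigned 8-bit value:
status = -1 & 0xFF
print(status)  # → 255
```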
So I'm guessing that something somewhere is causing the container to terminate prematurely. To make matters worse, I can run and re-run this test case dozens of times before I get the error, so it is very difficult to reproduce. Based on my preliminary review of our production systems, I believe the load on the agents encountering this problem plays a part, although I'm not entirely sure how.
Perhaps running many parallel builds on the same agent, all running Docker containers, may be a factor, but I'm not certain. Type: Bug. Status: Resolved. Priority: Major.
Description of the issue
Resolution: Duplicate. Labels: None. Environment: Jenkins 2. Any help anyone can give to debug this problem further would be appreciated. Kevin Phillips added a comment.
Server: Version: 1. CentOS Linux release 7. Apr 13 bear. IP address. Apr 13 bear. Hint: Some lines were ellipsized, use -l to show in full.
FROM ruby. Sending build context to Docker daemon. Re-reading the above, I noticed that I am getting a warning from dockerd in the docker system status report. Since I am on docker server version. My host is a development box that is heavily used for many projects; I am not willing to reformat the file system without careful preparation. Have you ever tried to use another graphdriver?
So I think it's better to reformat the file system, if possible. To test this, I changed the storage driver to vfs. So it seems the storage driver is not the problem here. The backing filesystem, xfs, with its faulty setup, remains a likely suspect. I do plan to reformat the file system, but must take steps to ensure that I do not shoot myself in the foot when I do so.
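For reference, the storage driver for such a test is normally selected in the daemon configuration file, /etc/docker/daemon.json, followed by a daemon restart (a sketch; remember that vfs is slow and intended only for testing):

```json
{
  "storage-driver": "vfs"
}
```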
Will do a quick trial of creating an xfs partition with appropriate support. I did the original install by allowing the default partitioning. Would expect others to fall into this trap as a consequence. For anyone else reading this, vfs is not a union file system and is not appropriate for production use.
Can you see if unshare --user works? I think you must be correct. I am on CentOS 7, and I suppose I will have to shift to a different Linux distro. This was based on examples from the unshare man page. Variations in the unshare arguments were tried with the same result. In the original form, and running as root, it returns the same error.
My docker container exited prematurely when my computer blacked out, and now it can't restart anymore: "Check if the specified host path exists and is the expected type. Error: failed to start containers: xxxx." How do I restart a docker container which exited prematurely with exitCode ? I'd docker rm the container and re-run whatever sequence started it initially.
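A typical recovery sequence for a container that will no longer start might look like this (a sketch; <name> is a placeholder):

```shell
docker inspect --format '{{.State.ExitCode}}: {{.State.Error}}' <name>  # why it stopped
docker start <name>                      # try restarting it in place
# If the start keeps failing (e.g. a host path the bind mount expects is gone),
# restore or recreate that path, or remove the container and re-run it:
docker rm <name>
```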
Besides restarting the container, you could check the following:
Since it's CentOS, which is very security hardened by default, it might be a permissions issue. Does it work if you use setenforce 0? Maybe journalctl shows you what permissions are missing. With SELinux, you'll also need to mount writable volumes with :z or :Z (see selinux label), e.g.:
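In docker-compose terms, that suggestion might be sketched as follows (service name and paths are illustrative):

```yaml
services:
  openhab:
    image: openhab/openhab
    volumes:
      - ./conf:/openhab/conf:Z   # :Z applies a private SELinux label; use :z for a shared one
```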
Just encountered the same kind of error on RHEL7 after upgrading the docker engine. It worked. So in the docker-compose. That makes sense indeed, grimlokason! I ran into the same issue on CentOS one day.
Labels: help wanted. How can I fix this? Google isn't being much help. First time installing openhab with a fresh docker installation, using docker-compose. Updated docker-compose. Perfect, thanks. I was away, but yes, this has done the trick; all working fine now.
Mierdin mentioned this issue Oct 16.
Additional information you deem important: We are running a nomad cluster with three clients. One of them couldn't schedule containers at some point, and then we figured out that we couldn't run any containers even manually. We tried to upgrade docker to the latest version; Docker got upgraded, but the issue persisted.
I'm running into the same issue as well. Docker 1. Without that option, containers run fine. The error looks the same otherwise, though. Server: Version: Apologies for the immature nature of my query; I still have not understood what is happening. IMBlues, thanks for the kernel stack report. Can you grep for a line along the lines of the following in your kernel dump: "kernel: runc:[1:CHILD]: page allocation failure"? I was curious about what it shows for "order:", i.e. how large the failed allocation was.
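That search might be run against the live kernel log or the ring buffer (a sketch):

```shell
journalctl -k | grep -i "page allocation failure"
# or, with surrounding context:
dmesg | grep -A5 "page allocation failure"
# In the matching line, order:N means the kernel failed to find 2^N contiguous pages.
```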
OS distro version and kernel number. Thanks for the details. So far this appears to be a kernel issue. I am tracking another report of the exact same kernel stack as above, with kernel version 3. Is anyone getting a repro of this in more recent kernels? Note that 3. Thanks for the reply. Our OS team has also confirmed the issue and will make an optimization for this. If there is any good news, I'll post it here.
I'm seeing the same stack trace that IMBlues posted, on some Ubuntu hosts; the kernel in this case is 3. IMBlues, do you mean removing custom patches that you added, or changes to the "vanilla" runc version? Do you have more information about the changes you made? In fact, we are still using the 3.