isula-build: sync upstream patches

Signed-off-by: DCCooper <1866858@gmail.com>
(cherry picked from commit 4e509c26e0bace00e01cde6a8a6eb50e418a4e77)
DCCooper 2021-12-08 17:57:40 +08:00 committed by openeuler-sync-bot
parent 7e3735aaf9
commit f0037c1207
10 changed files with 1097 additions and 3 deletions


@ -1 +1 @@
0.9.5-21
0.9.5-22


@ -1 +1 @@
ec7c7a741944af0725c3446c6fe09513269a18c7
0578a0f7daf2a8845d8d221fc3b3f3bdd4964d32


@ -2,7 +2,7 @@
Name: isula-build
Version: 0.9.5
Release: 21
Release: 22
Summary: A tool to build container images
License: Mulan PSL V2
URL: https://gitee.com/openeuler/isula-build
@ -85,6 +85,12 @@ fi
/usr/share/bash-completion/completions/isula-build
%changelog
* Wed Dec 08 2021 DCCooper <1866858@gmail.com> - 0.9.5-22
- Type:bugfix
- CVE:NA
- SUG:restart
- DESC:sync upstream patches
* Wed Nov 17 2021 jingxiaolu <lujingxiao@huawei.com> - 0.9.5-21
- Type:enhancement
- CVE:NA


@ -0,0 +1,208 @@
From 4be900104bbab7719c033e2c2b711bf62296d190 Mon Sep 17 00:00:00 2001
From: zwy <irenezwy@163.com>
Date: Fri, 29 Oct 2021 07:22:03 +0000
Subject: [PATCH 11/29] update README.md. Revised the English README file based
on its Chinese version.
---
README.md | 90 +++++++++++++++++++++++++++++++--------------------------------
1 file changed, 45 insertions(+), 45 deletions(-)
diff --git a/README.md b/README.md
index 1f72e4c..8c09b1b 100644
--- a/README.md
+++ b/README.md
@@ -1,45 +1,45 @@
# isula-build
-isula-build is a tool provided by iSula team for building container images. It can quickly build the container image according to the given `Dockerfile`.
+isula-build is a tool provided by the iSula team for building container images. It can quickly build a container image based on a given `Dockerfile`.
-The binary file `isula-build` is a CLI tool and `isula-builder` runs as a daemon responding all the requests from client.
+The tool adopts the server + client mode. The binary file `isula-build` is the client that provides a CLI for building and managing images, while `isula-builder` is the server that runs as a daemon in the background, responding to all requests from the client.
-It provides a command line tool that can be used to
+You can use the CLI to
-- build an image from a Dockerfile(build)
-- list all images in local store(image)
-- import a basic container image(import)
-- load image layers(load)
-- remove specified images(rm)
-- exporting images layers(save)
-- tag local images(tag)
-- pull image from remote repository(pull)
-- push image to remote repository(push)
-- view operating environment and system info(info)
-- login remote image repository(login)
-- logout remote image repository(logout)
-- query isula-build version(version)
+- Build an image from a Dockerfile (build).
+- List all images in local store (image).
+- Import container base images (import).
+- Load layered images (load).
+- Remove local persistent images (rm).
+- Export layered images (save).
+- Tag local persistent images (tag).
+- Pull images from a remote repository (pull).
+- Push images to a remote repository (push).
+- View operating environment and system information (info).
+- Log in to a remote image repository (login).
+- Log out of a remote image repository (logout).
+- Query isula-build version (version).
-We also
+In addition, the following capabilities are provided:
-- be compatible with Dockerfile grammar
-- support extended file attributes, e.g., linux security, IMA, EVM, user, trusted
-- support different image formats, e.g., docker-archive, isulad
+- Dockerfile-compatible syntax.
+- Support for extended file attributes, such as linux security, IMA, EVM, user, and trusted.
+- Support for exporting different image formats, for example, docker-archive and iSulad.
## Documentation
-- [guide](./doc/manual_en.md).
-- [more usage guide](./doc/manual_en.md#usage-guidelines).
+- [Container Image Building](./doc/manual_en.md)
+- [Usage Guidelines](./doc/manual_en.md#usage-guidelines)
## Getting Started
-### Install on openEuler
+### Installation on openEuler
-#### Install from source
+#### Install from source.
For compiling from source on openEuler, these packages are required on your OS:
- make
-- golang (version 1.13 or higher)
+- golang (version 1.13 or later)
- btrfs-progs-devel
- device-mapper-devel
- glib2-devel
@@ -75,9 +75,9 @@ After compiling success, you can install the binaries and default configuration
sudo make install
```
-#### Install as RPM package
+#### Install as RPM package.
-`isula-build` is now released with update pack of openEuler 20.03 LTS, you can install it by the help of yum or rpm. Before you install, please enable "update" in repo file.
+`isula-build` is now released with the update pack of openEuler 20.03 LTS. You can install it using yum or rpm. Before installing, please enable "update" in the repo file.
##### With `yum`
@@ -85,21 +85,21 @@ sudo make install
sudo yum install -y isula-build
```
-**NOTE**: Please make sure "update" part of your yum configuration is enabled.
+**NOTE**: Please make sure the "update" part of your yum configuration is enabled. You can obtain the repository configuration from the [openEuler repo list](https://repo.openeuler.org/).
##### With `rpm`
-you can download it from [openEuler's yum repo of update](https://repo.openeuler.org/) to your local machine, and intall it with such command:
+You can download the RPM package of isula-build and install it.
```sh
sudo rpm -ivh isula-build-*.rpm
```
-### Run the daemon server
+### Running the Daemon Server
-#### Run as system service
+#### Run as the system service.
-To manage `isula-builder` by systemd, please refer to following steps:
+To manage `isula-build` with systemd, refer to the following steps:
```sh
sudo install -p -m 640 ./isula-build.service /etc/systemd/system/isula-build.service
@@ -107,20 +107,20 @@ sudo systemctl enable isula-build
sudo systemctl start isula-build
```
-#### Directly running isula-builder
-You can also run the isula-builder command on the server to start the service.
+#### Directly run the isula-builder binary file.
+You can also run the isula-builder binary file on the server to start the service.
```sh
sudo isula-builder --dataroot="/var/lib/isula-build"
```
-### Example on building container images
+### Example on Building Container Images
-#### Requirements
+#### Prerequisites
For building container images, `runc` is required.
-You can get `runc` by the help of installing `docker` or `docker-runc` on your openEuler distro by:
+You can get `runc` by installing `docker` or `docker-runc` on your openEuler distro:
```sh
sudo yum install docker
@@ -132,9 +132,9 @@ or
sudo yum install docker-runc
```
-#### Building image
+#### Build an image.
-Here is an example for building a container image, for more details please refer to [usage](./doc/manual_en.md#usage-guidelines).
+Here is an example of building a container image. For more details, refer to [Usage Guidelines](./doc/manual_en.md#usage-guidelines).
Create a simple buildDir and write the Dockerfile
@@ -144,7 +144,7 @@ LABEL foo=bar
COPY ./* /home/dir1/
```
-Build the image in the buildDir
+Build the image in the buildDir.
```sh
$ sudo isula-build ctr-img build -f Dockerfile .
@@ -160,7 +160,7 @@ Storing signatures
Build success with image id: 9ec92a8819f9da1b06ea9ff83307ff859af2959b70bfab101f6a325b1a211549
```
-#### Listing images
+#### List local images.
```sh
$ sudo isula-build ctr-img images
@@ -170,20 +170,20 @@ $ sudo isula-build ctr-img images
<none> latest 9ec92a8819f9 2020-06-11 07:45:39.265106109 +0000 UTC
```
-#### Removing image
+#### Remove an image.
```sh
$ sudo isula-build ctr-img rm 9ec92a8819f9
Deleted: sha256:86567f7a01b04c662a9657aac436e8d63ecebb26da4252abb016d177721fa11b
```
-### Integrates with iSulad or docker
+### Integration with iSulad or Docker
-Integrates with `iSulad` or `docker` are listed in [integration](./doc/manual_en.md#directly-integrating-a-container-engine).
+Integration with `iSulad` or `docker` is described in [integration](./doc/manual_en.md#directly-integrating-a-container-engine).
## Precautions
-Constraints, limitations and the differences from `docker build` are listed in [precautions](./doc/manual_en.md#precautions).
+Constraints, limitations, and differences from `docker build` are listed in [precautions](./doc/manual_en.md#precautions).
## How to Contribute
--
1.8.3.1


@ -0,0 +1,200 @@
From a1becee58561451daea4bd69989b0806cfa9ceab Mon Sep 17 00:00:00 2001
From: DCCooper <1866858@gmail.com>
Date: Tue, 16 Nov 2021 17:31:58 +0800
Subject: [PATCH 22/29] perf: use bufio reader instead of ioutil.ReadFile
reason: read the file in fixed-size chunks; reading the whole
file into memory causes memory pressure
Signed-off-by: DCCooper <1866858@gmail.com>
---
util/cipher.go | 38 +++++++++++++++++++-----
util/cipher_test.go | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 114 insertions(+), 8 deletions(-)
diff --git a/util/cipher.go b/util/cipher.go
index ecbbc47..a5e3125 100644
--- a/util/cipher.go
+++ b/util/cipher.go
@@ -14,6 +14,7 @@
package util
import (
+ "bufio"
"crypto"
"crypto/aes"
"crypto/cipher"
@@ -234,6 +235,33 @@ func ReadPublicKey(path string) (rsa.PublicKey, error) {
return *key, nil
}
+func checkSumReader(path string) (string, error) {
+ const bufferSize = 32 * 1024 // 32KB
+
+ file, err := os.Open(filepath.Clean(path))
+ if err != nil {
+ return "", errors.Wrapf(err, "hash file failed")
+ }
+ defer func() {
+ if cErr := file.Close(); cErr != nil && err == nil {
+ err = cErr
+ }
+ }()
+ buf := make([]byte, bufferSize)
+ reader := bufio.NewReader(file)
+ hasher := sha256.New()
+ for {
+ switch n, err := reader.Read(buf); err {
+ case nil:
+ hasher.Write(buf[:n])
+ case io.EOF:
+ return fmt.Sprintf("%x", hasher.Sum(nil)), nil
+ default:
+ return "", err
+ }
+ }
+}
+
func hashFile(path string) (string, error) {
cleanPath := filepath.Clean(path)
if f, err := os.Stat(cleanPath); err != nil {
@@ -242,12 +270,7 @@ func hashFile(path string) (string, error) {
return "", errors.New("failed to hash directory")
}
- file, err := ioutil.ReadFile(cleanPath) // nolint:gosec
- if err != nil {
- return "", errors.Wrapf(err, "hash file failed")
- }
-
- return fmt.Sprintf("%x", sha256.Sum256(file)), nil
+ return checkSumReader(path)
}
func hashDir(path string) (string, error) {
@@ -261,11 +284,10 @@ func hashDir(path string) (string, error) {
return nil
}
if !info.IsDir() {
- f, err := ioutil.ReadFile(cleanPath) // nolint:gosec
+ fileHash, err := hashFile(cleanPath)
if err != nil {
return err
}
- fileHash := fmt.Sprintf("%x", sha256.Sum256(f))
checkSum = fmt.Sprintf("%s%s", checkSum, fileHash)
}
return nil
diff --git a/util/cipher_test.go b/util/cipher_test.go
index bab6dfe..4bbe894 100644
--- a/util/cipher_test.go
+++ b/util/cipher_test.go
@@ -15,10 +15,13 @@ package util
import (
"crypto"
+ "crypto/rand"
"crypto/sha1"
"crypto/sha256"
"crypto/sha512"
+ "fmt"
"hash"
+ "io"
"io/ioutil"
"os"
"path/filepath"
@@ -31,6 +34,9 @@ import (
)
const (
+ sizeKB = 1024
+ sizeMB = 1024 * sizeKB
+ sizeGB = 1024 * sizeMB
maxRepeatTime = 1000000
)
@@ -453,3 +459,81 @@ func TestCheckSum(t *testing.T) {
})
}
}
+
+func createFileWithSize(path string, size int) error {
+ file, err := os.Create(path)
+ if err != nil {
+ return err
+ }
+ _, err = io.CopyN(file, rand.Reader, int64(size))
+ return err
+}
+
+func benchmarkSHA256SumWithFileSize(b *testing.B, fileSize int) {
+ b.ReportAllocs()
+ filepath := fs.NewFile(b, b.Name())
+ defer filepath.Remove()
+ _ = createFileWithSize(filepath.Path(), fileSize)
+ b.ResetTimer()
+ for n := 0; n < b.N; n++ {
+ _, _ = SHA256Sum(filepath.Path())
+ }
+}
+
+func BenchmarkSHA256Sum(b *testing.B) {
+ tests := []struct {
+ fileSuffix string
+ fileSize int
+ }{
+ {fileSuffix: "100MB", fileSize: 100 * sizeMB},
+ {fileSuffix: "200MB", fileSize: 200 * sizeMB},
+ {fileSuffix: "500MB", fileSize: 500 * sizeMB},
+ {fileSuffix: "1GB", fileSize: 1 * sizeGB},
+ {fileSuffix: "2GB", fileSize: 2 * sizeGB},
+ {fileSuffix: "4GB", fileSize: 4 * sizeGB},
+ {fileSuffix: "8GB", fileSize: 8 * sizeGB},
+ }
+
+ for _, t := range tests {
+ name := fmt.Sprintf("BenchmarkSHA256SumWithFileSize_%s", t.fileSuffix)
+ b.Run(name, func(b *testing.B) {
+ benchmarkSHA256SumWithFileSize(b, t.fileSize)
+ })
+ }
+}
+
+func TestCreateFileWithSize(t *testing.T) {
+ newFile := fs.NewFile(t, t.Name())
+ defer newFile.Remove()
+ type args struct {
+ path string
+ size int
+ }
+ tests := []struct {
+ name string
+ args args
+ wantErr bool
+ }{
+ {
+ name: "TC-generate 500MB file",
+ args: args{
+ path: newFile.Path(),
+ size: 500 * sizeMB,
+ },
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ err := createFileWithSize(tt.args.path, tt.args.size)
+ if (err != nil) != tt.wantErr {
+ t.Errorf("createFileWithSize() error = %v, wantErr %v", err, tt.wantErr)
+ }
+ if err == nil {
+ file, _ := os.Stat(tt.args.path)
+ if file.Size() != int64(tt.args.size) {
+ t.Errorf("createFileWithSize() size = %v, actually %v", tt.args.size, file.Size())
+ }
+ }
+ })
+ }
+}
--
1.8.3.1


@ -0,0 +1,196 @@
From 5895bd148306694da6b17fbf20eb513269a676e8 Mon Sep 17 00:00:00 2001
From: DCCooper <1866858@gmail.com>
Date: Sat, 27 Nov 2021 15:16:14 +0800
Subject: [PATCH 23/29] doc: add documents for separated relative feature
Signed-off-by: DCCooper <1866858@gmail.com>
---
doc/manual_en.md | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
doc/manual_zh.md | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 146 insertions(+)
diff --git a/doc/manual_en.md b/doc/manual_en.md
index 3064c17..8fcb333 100644
--- a/doc/manual_en.md
+++ b/doc/manual_en.md
@@ -529,6 +529,33 @@ Loaded image as c07ddb44daa97e9e8d2d68316b296cc9343ab5f3d2babc5e6e03b80cd580478e
> - isula-build allows you to import a container image with a maximum size of 50 GB.
> - isula-build automatically recognizes the image format and loads it from the image layers file.
+#### load: Importing Separated Images
+
+The isula-build ctr-img load command is used to reassemble an image that was exported layer by layer into a complete image and load it into the system.
+
+The command prototype is as follows:
+
+```
+isula-build ctr-img load -d IMAGES_DIR [-b BASE_IMAGE] [-l LIB_IMAGE] -i APP_IMAGE
+```
+
+IMAGE: the NAME:TAG of the application image to be imported (it cannot be the image ID).
+
+The following flags are supported:
+
+- -d: mandatory. Specifies the folder where the layered application image is stored. The folder contains at least the app image and the complete manifest file. You can store the base-layer and lib-layer files separately and specify their paths with -b and -l.
+- -b: optional. Specifies the path of the base-layer image. If not specified, the path given by -d is used.
+- -l: optional. Specifies the path of the lib-layer image. If not specified, the path given by -d is used.
+- -i: mandatory. Specifies the name of the application image to be imported.
+- --no-check: optional. Skips SHA256 verification.
+
+> **Note:**
+>
+> - Parameters that take an image name must use the IMAGE_NAME:TAG form to identify a unique image. Using IMAGE_ID, or omitting the tag, may match multiple images, and the same image may get different IDs during import and export, so the result may deviate from your expectation.
+> - When --no-check is used, the SHA256 checksum of the tarball is skipped. Skipping the checksum check on tarballs may introduce uncertainty; you need to understand and accept the possible impact and consequences of doing so.
+> - The isula-build running directory /var/lib/isula-build/ must have at least twice the total size of the layered images. For example, to store images A (10 MB), B (20 MB), and C (30 MB), ensure that the disk where /var/lib/isula-build resides has 120 MB available (2 x (10 + 20 + 30)).
+> - When a layered image is saved or loaded, each file is read while its SHA256 value is computed, so concurrent operations consume memory linearly.
+
#### rm: Deleting a Local Persistent Image
You can run the rm command to delete an image from the local persistent storage. The command is as follows:
@@ -615,6 +642,51 @@ Save success with image: [busybox:latest nginx:latest]
>- Save exports an image in .tar format by default. If necessary, you can save the image and then manually compress it.
>- When exporting an image using image name, specify the entire image name with format: REPOSITORY:TAG.
+#### save: Exporting Separated Images
+
+The isula-build ctr-img save command can be used to export the base/lib/app layers. If multiple app images depend on the same base and lib, only one copy of those layers is exported. If -d is not used to specify the destination directory, the exported base/lib/app image packages are saved in the Images directory.
+
+The command prototype is as follows:
+
+```
+isula-build ctr-img save -b BASE_IMAGE:TAG [-l LIB_IMAGE:TAG] [-r rename.json] [ -d DST_DIR] IMAGE [IMAGE…]
+```
+
+IMAGE: the NAME:TAG of the application image to be exported (it cannot be the image ID). You can export multiple application images that share the same base/lib at the same time.
+
+The following flags are supported:
+
+- -b, --base: mandatory. Specifies the base-layer image tag, for example, euleros:latest. It is used to check whether the base image matches the base image in the app. The image name can contain a maximum of 255 characters (a-z0-9-*./), and the tag name a maximum of 128 characters (same as Docker).
+- -l, --lib: optional. Specifies the lib-layer image, for example, euleros:libfoo. If there is no lib layer in your application, this flag can be omitted.
+- -d: optional in general, but mandatory when save is executed concurrently so that each process writes layered images to its own directory. Specifies the directory for saving the exported results. The directory must be empty, and for concurrent saves its name must be unique; otherwise the saved images may be incomplete or incorrect.
+- -r: optional. Specifies a JSON file that renames the exported image .tar packages. If not specified, the exported app-layer image is named "ImageName_tag_app_image.tar.gz", the lib-layer image "ImageName_tag_lib_image.tar.gz", and the base-layer image "ImageName_tag_base_image.tar.gz" by default.
+
+If you need to rename the exported packages, create the corresponding JSON file. The format of the JSON file is as follows:
+
+```
+[
+ { "name": "repo_tag_app_image.tar.gz",
+ "rename": "some_app_image.tar.gz"
+ }
+ …
+]
+```
+
+> **Note:**
+>
+> - When saving a layered image, specify the image name instead of the image ID; otherwise an error is reported.
+> - When saving a layered image, ensure that the base image has only one layer; -b must specify an image.
+> - When saving a layered image, specify the directory (-d) for storing the layered images. If it is not specified, the Images folder in the current directory is used.
+> - When saving a layered image, ensure that the directory for storing the layered images is empty; otherwise an error is reported.
+> - A manifest file is generated when a layered image is saved. It records the name and sha256sum of each layered image's package. During loading, the sha256sum of each package is verified to prevent incorrect use.
+> - If there is no lib layer in your application scenario, you do not need to add the -l flag.
+> - The app images must share the same base/lib images.
+> - Parameters that take an image name must use the IMAGE_NAME:TAG form to identify a unique image. Using IMAGE_ID, or omitting the tag, may match multiple images, and the same image may get different IDs during import and export, so the result may deviate from your expectation.
+> - When multiple images are exported in layers, if these images share the same lib layer, specify the name of the lib-layer image; otherwise the save fails.
+> - The isula-build running directory /var/lib/isula-build/ must have at least twice the total size of the layered images. For example, to store images A (10 MB), B (20 MB), and C (30 MB), ensure that the disk where /var/lib/isula-build resides has 120 MB available (2 x (10 + 20 + 30)).
+> - When a layered image is saved or loaded, each file is read while its SHA256 value is computed, so concurrent operations consume memory linearly.
#### tag: Tagging Local Persistent Images
diff --git a/doc/manual_zh.md b/doc/manual_zh.md
index 8104305..e68c77e 100644
--- a/doc/manual_zh.md
+++ b/doc/manual_zh.md
@@ -526,6 +526,33 @@ Loaded image as c07ddb44daa97e9e8d2d68316b296cc9343ab5f3d2babc5e6e03b80cd580478e
> - isula-build 支持导入最大50G的容器层叠镜像。
> - isula-build 会自动识别容器层叠镜像的格式并进行导入。
+#### load: 导入分层镜像
+
+isula-build ctr-img load可以将isula-build ctr-img save分层导出的镜像拼装回完整的镜像并load到系统中。
+
+命令原型如下:
+
+```
+isula-build ctr-img load -d IMAGES_DIR [-b BASE_IMAGE] [-l LIB_IMAGE] -i APP_IMAGE
+```
+
+IMAGE需要导入的应用镜像名:TAG不能是镜像ID
+
+支持如下Flags
+
+- -d必选指定应用分层镜像所在的文件夹。文件夹中至少包含app镜像和完整的manifest文件。可以将base层和lib层文件分别存放然后通过-b和-l参数指定。
+- -b可选指定base层镜像的路径。如果不指定默认在-d指定的路径中。
+- -l可选指定lib层镜像的路径。如果不指定默认在-d指定的路径中。
+- -i必选指定需要导入的应用镜像名字。
+- no-check可选跳过sha256校验。
+
+> **说明:**
+>
+> - 需要输入镜像名的参数要使用IMAGE_NAME:TAG的方式指明唯一的镜像因为使用IMAGE_ID或不加TAG可能对应多个镜像或者在导入导出过程中相同的镜像会有不同的ID导致偏离用户预期的执行结果。
+> - 使用no-check时会跳过对tarball的sha256校验和检查。放弃对tarball进行校验和检查可能引入不确定因素用户需明确和接受此类行为可能带来的影响和结果。
+> - 由于涉及中间状态转换、保存isula-build运行目录/var/lib/isula-build/需保证容量至少为需要进行分层镜像总大小的两倍。假设需要对A10MB, B20MB, C30MB 三个镜像进行保存分层镜像,则需要保证/var/lib/isula-build所在磁盘大小为2*(10+20+30)=120M。
+> - 在保存、加载分层镜像时在计算文件的sha256值时需要将文件读取进入内存中故并发操作时会有线性内存消耗。
+
#### rm: 删除本地持久化镜像
可通过rm命令删除当前本地持久化存储的镜像。命令原型为
@@ -611,6 +638,53 @@ Save success with image: [busybox:latest nginx:latest]
> - save 导出的镜像默认格式为未压缩的tar格式如有需求用户可以再save之后手动压缩。
> - 在使用镜像名导出镜像时需要给出完整的镜像名格式REPOSITORY:TAG。
+#### save: 导出分层镜像
+
+isula-build ctr-img save可以将base/lib/app分层导出且如果多个app层依赖相同的base和lib只会导出一份。如果不用-d指定导出的目标目录导出的base/lib/app镜像包会被保存在Images目录下。
+
+命令原型如下:
+
+```
+isula-build ctr-img save -b BASE_IMAGE:TAG [-l LIB_IMAGE:TAG] [-r rename.json] [ -d DST_DIR] IMAGE [IMAGE…]
+```
+
+IMAGE需要导出的应用镜像名:TAG不能是镜像ID。可以同时导出多个base/lib相同的应用镜像。
+
+支持如下Flags:
+
+- -b, --base必选。指定base层镜像tag例如euleros:latest。这个参数是必选的用于比对base镜像和app中的基础镜像是否相同。镜像名允许[a-z0-9-*./]最大长度为255tag名允许[a-z0-9-*.]最大长度为128与docker相同
+
+- -l, --lib可选。指定lib层镜像例如的euleros:libfoo。这个参数是可选的如果实际应用中没有lib层可以不加该参数。
+
+- -d可选如果是并发执行为了保证并发进程得到的分层镜像保存目录不冲突该参数必选。指定导出结果的保存目录。该目录必须为空目录且如果save是并发执行的需要用户自己保证该目录名称不可重复否则保存的镜像会不完整或有错误。
+
+- -r指定对导出镜像tar压缩包的重命名描述文件json格式。 如果不加该参数则导出的app层镜像名默认为“镜像名_tag_app_image.tar.gz”lib层镜像默认为“镜像名_tag_lib_image.tar.gz”base层镜像默认为“镜像名_tag_base_image.tar.gz”。
+
+ 如果需要进行重命名则根据提示创建相应的json文件。json文件的格式如下
+
+ ```
+ [
+ { "name": "repo_tag_app_image.tar.gz",
+ "rename": "some_app_image.tar.gz"
+ }
+ …
+ ]
+ ```
+
+> **说明:**
+>
+> - 在保存分层镜像时需指定镜像名称而非镜像ID否则会报错。
+> - 在保存分层镜像时需要确保base镜像只有一层且-b必须指定镜像。
+> - 在保存分层镜像时,需指定分层镜像保存的目录(-d)如果未指定则使用当前目录下的Images文件夹。
+> - 在保存分层镜像时,需要确定分层镜像保存的目录为空,否则报错。
+> - 保存分层镜像时会生成一个 manifest 文件里面记录每个分层镜像的压缩包名称及sha256sum加载时会校验每个压缩包的 sha256sum 以免被错误使用。
+> - 如果实际应用场景没有lib层则不需要增加-l参数。
+> - app镜像必须为base/lib相同的镜像。
+> - 需要输入镜像名的参数要使用IMAGE_NAME:TAG的方式指明唯一的镜像因为使用IMAGE_ID或不加TAG可能对应多个镜像或者在导入导出过程中相同的镜像会有不同的ID导致偏离用户预期的执行结果。
+> - 当对多个镜像进行分层时如果这些镜像都拥有相同的lib层需指明lib层镜像的名称否则保存失败。
+> - 由于涉及中间状态转换、保存isula-build运行目录/var/lib/isula-build/需保证容量至少为需要进行分层镜像总大小的两倍。假设需要对A10MB, B20MB, C30MB 三个镜像进行保存分层镜像,则需要保证/var/lib/isula-build所在磁盘大小为2*(10+20+30)=120M
+> - 在保存、加载分层镜像时在计算文件的sha256值时需要将文件读取进入内存中故并发操作时会有线性内存消耗。
+
#### tag: 给本地持久化镜像打标签
--
1.8.3.1


@ -0,0 +1,29 @@
From 0aa3f0bda673bc3defd9990e71507aa39f6fcb55 Mon Sep 17 00:00:00 2001
From: jingxiaolu <lujingxiao@huawei.com>
Date: Tue, 30 Nov 2021 10:45:14 +0800
Subject: [PATCH 27/29] tests: fix make test-unit-cover not generating cover
files
Fixes: #I4KDKL
Signed-off-by: jingxiaolu <lujingxiao@huawei.com>
---
hack/unit_test.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hack/unit_test.sh b/hack/unit_test.sh
index b6a7978..0237605 100755
--- a/hack/unit_test.sh
+++ b/hack/unit_test.sh
@@ -62,7 +62,7 @@ function run_unit_test() {
fi
# TEST_ARGS is " -args SKIP_REG=foo", so no double quote for it
# shellcheck disable=SC2086
- go test -v "${go_test_race_flag}" "${go_test_mod_method}" "${go_test_coverprofile_flag}" "${go_test_covermode_flag}" -coverpkg=${package} "${go_test_count_method}" "${go_test_timeout_flag}" "${package}" ${TEST_ARGS} >> "${testlog}"
+ go test -v ${go_test_race_flag} "${go_test_mod_method}" "${go_test_coverprofile_flag}" "${go_test_covermode_flag}" -coverpkg=${package} "${go_test_count_method}" "${go_test_timeout_flag}" "${package}" ${TEST_ARGS} >> "${testlog}"
done
if grep -E -- "--- FAIL:|^FAIL" "${testlog}"; then
--
1.8.3.1
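The one-character change above drops the double quotes around `${go_test_race_flag}`. In POSIX shell, a quoted empty variable still expands to one empty argument, while an unquoted empty variable expands to nothing, which is why the quoted form broke `go test` when the race flag was unset. A small Go sketch (assuming a POSIX `sh` on PATH; `argCount` is a hypothetical helper) reproduces the difference:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// argCount runs `sh -c 'echo $#'` with the given extra arguments and
// returns how many positional parameters the shell received.
func argCount(args ...string) string {
	cmd := exec.Command("sh", append([]string{"-c", "echo $#", "sh"}, args...)...)
	out, err := cmd.Output()
	if err != nil {
		panic(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	race := "" // e.g. go_test_race_flag left unset

	// Quoted in shell terms: the empty string is still one real argument.
	fmt.Println(argCount(race)) // 1

	// Unquoted in shell terms: an empty word disappears entirely.
	var args []string
	if race != "" {
		args = append(args, race)
	}
	fmt.Println(argCount(args...)) // 0
}
```

The unquoted expansion (with the shellcheck SC2086 suppression already present in the script) is therefore the intended behavior here, not an oversight.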


@ -0,0 +1,99 @@
From 1e56fb7d42b3a91ed7b11485d65dd52b12012a81 Mon Sep 17 00:00:00 2001
From: DCCooper <1866858@gmail.com>
Date: Wed, 8 Dec 2021 12:51:03 +0800
Subject: [PATCH 28/29] test: fix go test failures being reported as success
Signed-off-by: DCCooper <1866858@gmail.com>
---
cmd/daemon/main_test.go | 23 +++++++++++++----------
hack/unit_test.sh | 7 ++++---
2 files changed, 17 insertions(+), 13 deletions(-)
diff --git a/cmd/daemon/main_test.go b/cmd/daemon/main_test.go
index d98ea83..3947f7a 100644
--- a/cmd/daemon/main_test.go
+++ b/cmd/daemon/main_test.go
@@ -14,6 +14,7 @@
package main
import (
+ "fmt"
"io/ioutil"
"os"
"testing"
@@ -143,7 +144,7 @@ func TestRunAndDataRootSet(t *testing.T) {
if err != nil {
t.Fatalf("get default store options failed with error: %v", err)
}
-
+
var storeOpt store.DaemonStoreOptions
storeOpt.RunRoot = option.RunRoot
storeOpt.DataRoot = option.GraphRoot
@@ -158,6 +159,15 @@ func TestRunAndDataRootSet(t *testing.T) {
expectation store.DaemonStoreOptions
}{
{
+ // first run so can not be affected by other testcase
+ name: "TC3 - all not set",
+ setF: setStorage("[storage]\ndriver = \"overlay\""),
+ expectation: store.DaemonStoreOptions{
+ DataRoot: "/var/lib/containers/storage",
+ RunRoot: "/var/run/containers/storage",
+ },
+ },
+ {
name: "TC1 - cmd set, configuration and storage not set",
setF: func() {
cmd.PersistentFlags().Set("runroot", runRoot.Path())
@@ -176,17 +186,10 @@ func TestRunAndDataRootSet(t *testing.T) {
expectation: result,
},
{
- name: "TC3 - all not set",
- setF: setStorage("[storage]"),
- expectation: store.DaemonStoreOptions{
- DataRoot: "/var/lib/containers/storage",
- RunRoot: "/var/run/containers/storage",
- },
- },
- {
name: "TC4 - cmd and configuration not set, storage set",
setF: func() {
- config := "[storage]\nrunroot = \"" + runRoot.Join("storage") + "\"\ngraphroot = \"" + dataRoot.Join("storage") + "\""
+ config := fmt.Sprintf("[storage]\ndriver = \"%s\"\nrunroot = \"%s\"\ngraphroot = \"%s\"\n",
+ "overlay", runRoot.Join("storage"), dataRoot.Join("storage"))
sT := setStorage(config)
sT()
},
diff --git a/hack/unit_test.sh b/hack/unit_test.sh
index 0237605..e13bca3 100755
--- a/hack/unit_test.sh
+++ b/hack/unit_test.sh
@@ -63,14 +63,13 @@ function run_unit_test() {
# TEST_ARGS is " -args SKIP_REG=foo", so no double quote for it
# shellcheck disable=SC2086
go test -v ${go_test_race_flag} "${go_test_mod_method}" "${go_test_coverprofile_flag}" "${go_test_covermode_flag}" -coverpkg=${package} "${go_test_count_method}" "${go_test_timeout_flag}" "${package}" ${TEST_ARGS} >> "${testlog}"
+ grep "^[?|ok].*${package}" "${testlog}"
done
if grep -E -- "--- FAIL:|^FAIL" "${testlog}"; then
echo "Testing failed... Please check ${testlog}"
+ return 1
fi
- tail -n 1 "${testlog}"
-
- rm -f "${testlog}"
}
function generate_unit_test_coverage() {
@@ -82,4 +81,6 @@ function generate_unit_test_coverage() {
precheck
run_unit_test
+exit_flag=$?
generate_unit_test_coverage
+exit $exit_flag
--
1.8.3.1
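The rewrite of `tarballCheckSum` above replaces three near-identical blocks with a descriptor table. The pattern can be isolated as a minimal sketch (`checkTarballs` is a hypothetical wrapper; only the emptiness rule is shown, not the real checksum call):

```go
package main

import "fmt"

// checkInfo mirrors the descriptor the patch introduces: each entry says
// whether an empty path is acceptable for that tarball kind.
type checkInfo struct {
	path       string
	str        string
	canBeEmpty bool
}

// checkTarballs walks the descriptor table instead of repeating the
// same emptiness check once per tarball kind.
func checkTarballs(checks []checkInfo) error {
	for _, c := range checks {
		if len(c.path) == 0 && !c.canBeEmpty {
			return fmt.Errorf("%s tarball path can not be empty", c.str)
		}
	}
	return nil
}

func main() {
	// As in the patch: lib may be absent, base and app may not.
	checks := []checkInfo{
		{path: "/tmp/base.tar.gz", str: "base image", canBeEmpty: false},
		{path: "", str: "lib image", canBeEmpty: true},
		{path: "/tmp/app.tar.gz", str: "app image", canBeEmpty: false},
	}
	fmt.Println(checkTarballs(checks)) // <nil>

	checks[0].path = "" // a missing base image is an error
	fmt.Println(checkTarballs(checks))
}
```

The same table-driven shape is reused for `unpackTarballs` in this patch: one loop, one error path, and adding a fourth tarball kind becomes a one-line change.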


@ -0,0 +1,350 @@
From a3638072985a0cb71ff561ad5e5bbc2454f81c1f Mon Sep 17 00:00:00 2001
From: DCCooper <1866858@gmail.com>
Date: Wed, 8 Dec 2021 12:51:20 +0800
Subject: [PATCH 29/29] isula-build: fix problems found by code review
Signed-off-by: DCCooper <1866858@gmail.com>
---
daemon/load.go | 67 ++++++++++++++++++++++++-------------------------
daemon/save.go | 79 ++++++++++++++++++++--------------------------------------
image/image.go | 3 ++-
util/cipher.go | 8 +++++-
4 files changed, 69 insertions(+), 88 deletions(-)
diff --git a/daemon/load.go b/daemon/load.go
index 378325c..894159b 100644
--- a/daemon/load.go
+++ b/daemon/load.go
@@ -69,9 +69,9 @@ type separatorLoad struct {
}
type loadOptions struct {
+ logEntry *logrus.Entry
path string
format string
- logEntry *logrus.Entry
sep separatorLoad
}
@@ -355,7 +355,7 @@ func (s *separatorLoad) getTarballInfo() error {
return errors.Wrap(err, "join manifest file path failed")
}
- var t = make(map[string]tarballInfo)
+ var t = make(map[string]tarballInfo, 1)
if err = util.LoadJSONFile(manifest, &t); err != nil {
return errors.Wrap(err, "load manifest file failed")
}
@@ -370,7 +370,7 @@ func (s *separatorLoad) getTarballInfo() error {
}
func (s *separatorLoad) constructTarballInfo() (err error) {
- s.log.Infof("construct image tarball info for %s", s.appName)
+ s.log.Infof("Construct image tarball info for %s", s.appName)
// fill up path for separator
// this case should not happened since client side already check this flag
if len(s.appName) == 0 {
@@ -408,26 +408,25 @@ func (s *separatorLoad) tarballCheckSum() error {
return nil
}
- // app image tarball can not be empty
- if len(s.appPath) == 0 {
- return errors.New("app image tarball path can not be empty")
- }
- if err := util.CheckSum(s.appPath, s.info.AppHash); err != nil {
- return errors.Wrapf(err, "check sum for file %q failed", s.appPath)
- }
-
- // base image tarball can not be empty
- if len(s.basePath) == 0 {
- return errors.New("base image tarball path can not be empty")
- }
- if err := util.CheckSum(s.basePath, s.info.BaseHash); err != nil {
- return errors.Wrapf(err, "check sum for file %q failed", s.basePath)
- }
-
- // lib image may be empty image
- if len(s.libPath) != 0 {
- if err := util.CheckSum(s.libPath, s.info.LibHash); err != nil {
- return errors.Wrapf(err, "check sum for file %q failed", s.libPath)
+ type checkInfo struct {
+ path string
+ hash string
+ str string
+ canBeEmpty bool
+ }
+ checkLen := 3
+ var checkList = make([]checkInfo, 0, checkLen)
+ checkList = append(checkList, checkInfo{path: s.basePath, hash: s.info.BaseHash, canBeEmpty: false, str: "base image"})
+ checkList = append(checkList, checkInfo{path: s.libPath, hash: s.info.LibHash, canBeEmpty: true, str: "lib image"})
+ checkList = append(checkList, checkInfo{path: s.appPath, hash: s.info.AppHash, canBeEmpty: false, str: "app image"})
+ for _, p := range checkList {
+ if len(p.path) == 0 && !p.canBeEmpty {
+ return errors.Errorf("%s tarball path can not be empty", p.str)
+ }
+ if len(p.path) != 0 {
+ if err := util.CheckSum(p.path, p.hash); err != nil {
+ return errors.Wrapf(err, "check sum for file %q failed", p.path)
+ }
}
}
@@ -457,18 +456,18 @@ func (s *separatorLoad) unpackTarballs() error {
return errors.Wrap(err, "failed to make temporary directories")
}
- // unpack base first and the later images will be moved here
- if err := util.UnpackFile(s.basePath, s.tmpDir.base, archive.Gzip, false); err != nil {
- return errors.Wrapf(err, "unpack base tarball %q failed", s.basePath)
- }
-
- if err := util.UnpackFile(s.appPath, s.tmpDir.app, archive.Gzip, false); err != nil {
- return errors.Wrapf(err, "unpack app tarball %q failed", s.appPath)
- }
+ type unpackInfo struct{ path, dir, str string }
+ unpackLen := 3
+ var unpackList = make([]unpackInfo, 0, unpackLen)
+ unpackList = append(unpackList, unpackInfo{path: s.basePath, dir: s.tmpDir.base, str: "base image"})
+ unpackList = append(unpackList, unpackInfo{path: s.appPath, dir: s.tmpDir.app, str: "app image"})
+ unpackList = append(unpackList, unpackInfo{path: s.libPath, dir: s.tmpDir.lib, str: "lib image"})
- if len(s.libPath) != 0 {
- if err := util.UnpackFile(s.libPath, s.tmpDir.lib, archive.Gzip, false); err != nil {
- return errors.Wrapf(err, "unpack lib tarball %q failed", s.libPath)
+ for _, p := range unpackList {
+ if len(p.path) != 0 {
+ if err := util.UnpackFile(p.path, p.dir, archive.Gzip, false); err != nil {
+ return errors.Wrapf(err, "unpack %s tarball %q failed", p.str, p.path)
+ }
}
}
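
The two hunks above replace copy-pasted per-tarball `if` blocks with a table-driven loop over a small struct slice. A minimal standalone sketch of that pattern follows; the `tarball` type and `validate` function are illustrative names, not the actual isula-build code, and `canBeEmpty` mirrors the lib-image case, which is allowed to be absent.

```go
package main

import (
	"errors"
	"fmt"
)

// tarball describes one archive to validate; canBeEmpty marks archives
// that may legitimately be missing (the lib image in the hunk above).
type tarball struct {
	path       string
	name       string
	canBeEmpty bool
}

// validate walks the table once, rejecting any required tarball with an
// empty path, instead of three near-identical hand-written checks.
func validate(list []tarball) error {
	for _, t := range list {
		if t.path == "" && !t.canBeEmpty {
			return errors.New(t.name + " tarball path can not be empty")
		}
	}
	return nil
}

func main() {
	list := []tarball{
		{path: "/tmp/base.tar.gz", name: "base image"},
		{path: "", name: "lib image", canBeEmpty: true},
		{path: "/tmp/app.tar.gz", name: "app image"},
	}
	fmt.Println(validate(list)) // prints <nil>: the lib image may be empty
}
```

Adding a fourth tarball kind later becomes a one-line table entry rather than another duplicated block, which is the point of the refactor.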
diff --git a/daemon/save.go b/daemon/save.go
index f14a485..7a110bd 100644
--- a/daemon/save.go
+++ b/daemon/save.go
@@ -77,9 +77,9 @@ type saveOptions struct {
}
type separatorSave struct {
+ log *logrus.Entry
renameData []renames
tmpDir imageTmpDir
- log *logrus.Entry
base string
lib string
dest string
@@ -190,7 +190,7 @@ func (b *Backend) Save(req *pb.SaveRequest, stream pb.Control_SaveServer) (err e
}).Info("SaveRequest received")
opts := b.getSaveOptions(req)
- if err = opts.check(); err != nil {
+ if err = opts.manage(); err != nil {
return errors.Wrap(err, "check save options failed")
}
@@ -278,17 +278,17 @@ func messageHandler(stream pb.Control_SaveServer, cliLogger *logger.Logger) func
}
}
-func (opts *saveOptions) check() error {
+func (opts *saveOptions) manage() error {
if err := opts.checkImageNameIsID(); err != nil {
return err
}
- if err := opts.checkFormat(); err != nil {
+ if err := opts.setFormat(); err != nil {
return err
}
if err := opts.filterImageName(); err != nil {
return err
}
- if err := opts.checkRenameFile(); err != nil {
+ if err := opts.loadRenameFile(); err != nil {
return err
}
@@ -318,7 +318,7 @@ func (opts *saveOptions) checkImageNameIsID() error {
return nil
}
-func (opts *saveOptions) checkFormat() error {
+func (opts *saveOptions) setFormat() error {
switch opts.format {
case constant.DockerTransport:
opts.format = constant.DockerArchiveTransport
@@ -337,7 +337,7 @@ func (opts *saveOptions) filterImageName() error {
return nil
}
- visitedImage := make(map[string]bool)
+ visitedImage := make(map[string]bool, 1)
for _, imageName := range opts.oriImgList {
if _, exists := visitedImage[imageName]; exists {
continue
@@ -351,8 +351,7 @@ func (opts *saveOptions) filterImageName() error {
finalImage, ok := opts.finalImageSet[img.ID]
if !ok {
- finalImage = &savedImage{exist: true}
- finalImage.tags = []reference.NamedTagged{}
+ finalImage = &savedImage{exist: true, tags: []reference.NamedTagged{}}
opts.finalImageOrdered = append(opts.finalImageOrdered, img.ID)
}
@@ -369,7 +368,7 @@ func (opts *saveOptions) filterImageName() error {
return nil
}
-func (opts *saveOptions) checkRenameFile() error {
+func (opts *saveOptions) loadRenameFile() error {
if len(opts.sep.renameFile) != 0 {
var reName []renames
if err := util.LoadJSONFile(opts.sep.renameFile, &reName); err != nil {
@@ -494,12 +493,11 @@ func (s *separatorSave) adjustLayers() ([]imageManifest, error) {
return man, nil
}
-func separateImage(opt saveOptions) error {
+func separateImage(opt saveOptions) (err error) {
s := &opt.sep
s.log.Infof("Start saving separated images %v", opt.oriImgList)
- var errList []error
- if err := os.MkdirAll(s.dest, constant.DefaultRootDirMode); err != nil {
+ if err = os.MkdirAll(s.dest, constant.DefaultRootDirMode); err != nil {
return err
}
@@ -507,30 +505,26 @@ func separateImage(opt saveOptions) error {
if tErr := os.RemoveAll(s.tmpDir.root); tErr != nil && !os.IsNotExist(tErr) {
s.log.Warnf("Removing save tmp directory %q failed: %v", s.tmpDir.root, tErr)
}
- if len(errList) != 0 {
+ if err != nil {
if rErr := os.RemoveAll(s.dest); rErr != nil && !os.IsNotExist(rErr) {
s.log.Warnf("Removing save dest directory %q failed: %v", s.dest, rErr)
}
}
}()
- if err := util.UnpackFile(opt.outputPath, s.tmpDir.untar, archive.Gzip, true); err != nil {
- errList = append(errList, err)
+ if err = util.UnpackFile(opt.outputPath, s.tmpDir.untar, archive.Gzip, true); err != nil {
return errors.Wrapf(err, "unpack %q failed", opt.outputPath)
}
- manifest, err := s.adjustLayers()
- if err != nil {
- errList = append(errList, err)
- return errors.Wrap(err, "adjust layers failed")
+ manifest, aErr := s.adjustLayers()
+ if aErr != nil {
+ return errors.Wrap(aErr, "adjust layers failed")
}
- imgInfos, err := s.constructImageInfos(manifest, opt.localStore)
- if err != nil {
- errList = append(errList, err)
- return errors.Wrap(err, "process image infos failed")
+ imgInfos, cErr := s.constructImageInfos(manifest, opt.localStore)
+ if cErr != nil {
+ return errors.Wrap(cErr, "process image infos failed")
}
- if err := s.processImageLayers(imgInfos); err != nil {
- errList = append(errList, err)
+ if err = s.processImageLayers(imgInfos); err != nil {
return err
}
@@ -552,7 +546,7 @@ func (s *separatorSave) processImageLayers(imgInfos map[string]imageInfo) error
sort.Strings(sortedKey)
for _, k := range sortedKey {
info := imgInfos[k]
- if err := s.clearDirs(true); err != nil {
+ if err := s.clearTempDirs(); err != nil {
return errors.Wrap(err, "clear tmp dirs failed")
}
var t tarballInfo
@@ -584,32 +578,13 @@ func (s *separatorSave) processImageLayers(imgInfos map[string]imageInfo) error
return nil
}
-func (s *separatorSave) clearDirs(reCreate bool) error {
- tmpDir := s.tmpDir
- dirs := []string{tmpDir.base, tmpDir.app, tmpDir.lib}
- var mkTmpDirs = func(dirs []string) error {
- for _, dir := range dirs {
- if err := os.MkdirAll(dir, constant.DefaultRootDirMode); err != nil {
- return err
- }
- }
- return nil
- }
-
- var rmTmpDirs = func(dirs []string) error {
- for _, dir := range dirs {
- if err := os.RemoveAll(dir); err != nil {
- return err
- }
+func (s *separatorSave) clearTempDirs() error {
+ dirs := []string{s.tmpDir.base, s.tmpDir.app, s.tmpDir.lib}
+ for _, dir := range dirs {
+ if err := os.RemoveAll(dir); err != nil {
+ return err
}
- return nil
- }
-
- if err := rmTmpDirs(dirs); err != nil {
- return err
- }
- if reCreate {
- if err := mkTmpDirs(dirs); err != nil {
+ if err := os.MkdirAll(dir, constant.DefaultRootDirMode); err != nil {
return err
}
}
diff --git a/image/image.go b/image/image.go
index b24cb41..37cd7fa 100644
--- a/image/image.go
+++ b/image/image.go
@@ -626,7 +626,8 @@ func GetNamedTaggedReference(image string) (reference.NamedTagged, string, error
return nil, "", nil
}
- if slashLastIndex, sepLastIndex := strings.LastIndex(image, "/"), strings.LastIndex(image, ":"); sepLastIndex == -1 || (sepLastIndex < slashLastIndex) {
+ slashLastIndex, sepLastIndex := strings.LastIndex(image, "/"), strings.LastIndex(image, ":")
+ if sepLastIndex == -1 || (sepLastIndex < slashLastIndex) {
image = fmt.Sprintf("%s:%s", image, constant.DefaultTag)
}
diff --git a/util/cipher.go b/util/cipher.go
index a5e3125..67cb52b 100644
--- a/util/cipher.go
+++ b/util/cipher.go
@@ -212,6 +212,9 @@ func GenRSAPublicKeyFile(key *rsa.PrivateKey, path string) error {
if err := pem.Encode(file, block); err != nil {
return err
}
+ if cErr := file.Close(); cErr != nil {
+ return cErr
+ }
return nil
}
@@ -230,7 +233,10 @@ func ReadPublicKey(path string) (rsa.PublicKey, error) {
if err != nil {
return rsa.PublicKey{}, err
}
- key := pubInterface.(*rsa.PublicKey)
+ key, ok := pubInterface.(*rsa.PublicKey)
+ if !ok {
+ return rsa.PublicKey{}, errors.New("failed to find public key type")
+ }
return *key, nil
}
--
1.8.3.1

View File

@ -54,3 +54,9 @@ patch/0088-bugfix-loaded-images-cover-existing-images-name-and-.patch
patch/0089-isula-build-fix-panic-when-using-image-ID-to-save-se.patch
patch/0090-enhancement-add-log-info-to-show-the-image-layer-num.patch
patch/0091-add-repo-to-local-image-when-output-transporter-is-d.patch
patch/0092-update-README.md.patch
patch/0093-perf-use-bufio-reader-instead-ioutil.ReadFile.patch
patch/0094-doc-add-documents-for-separated-relative-feature.patch
patch/0095-tests-fixes-make-test-unit-cover-not-generates-cover.patch
patch/0096-test-fix-go-test-failed-but-show-success.patch
patch/0097-isula-build-fix-problems-found-by-code-review.patch